Test Report: Docker_Linux_crio_arm64 16890

dc702cb3cbb2bfe371541339d66d19e451f60279:2023-07-17:30187

Failed tests (7/304)

Order  Failed test                                           Duration (s)
25     TestAddons/parallel/Ingress                           170.91
47     TestErrorSpam/setup                                   32.64
154    TestIngressAddonLegacy/serial/ValidateIngressAddons   184.13
204    TestMultiNode/serial/PingHostFrom2Pods                5.22
225    TestRunningBinaryUpgrade                              69.36
228    TestMissingContainerUpgrade                           179.01
240    TestStoppedBinaryUpgrade/Upgrade                      69.77
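
Each failure below can be re-run in isolation rather than repeating the whole suite. A minimal sketch, assuming a checkout of the minikube repository and this job's driver/runtime combination; the TEST_ARGS pattern follows minikube's contributor docs and the exact flags may differ between versions:

    # Re-run only the failed Ingress test against the docker driver + crio runtime.
    env TEST_ARGS="-minikube-start-args=--driver=docker --container-runtime=crio -test.run TestAddons/parallel/Ingress" make integration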
TestAddons/parallel/Ingress (170.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-966885 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-966885 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-966885 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c6b0ff3e-cdab-4c6a-8252-2d0e3d282741] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c6b0ff3e-cdab-4c6a-8252-2d0e3d282741] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.021517073s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-966885 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-966885 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.438886767s)
** stderr **
	ssh: Process exited with status 28
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
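
Exit status 28 matches curl's operation-timeout code (CURLE_OPERATION_TIMEDOUT), propagated back through minikube ssh: nothing answered on port 80 inside the node within the time limit. The failing check can be reproduced by hand against a live profile using only commands already shown in this trace; addons-966885 is this run's profile name and will differ locally:

    # Wait for the controller, deploy the test pod and ingress, then curl through the node.
    kubectl --context addons-966885 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
    kubectl --context addons-966885 replace --force -f testdata/nginx-ingress-v1.yaml
    kubectl --context addons-966885 replace --force -f testdata/nginx-pod-svc.yaml
    out/minikube-linux-arm64 -p addons-966885 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"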
addons_test.go:262: (dbg) Run:  kubectl --context addons-966885 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:262: (dbg) Done: kubectl --context addons-966885 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.080488128s)
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-966885 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.051636091s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
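
The lookup timed out rather than returning NXDOMAIN, i.e. nothing answered DNS queries on the node IP at all. A hedged way to narrow this down on a live cluster; the grep is an assumption about the ingress-dns pod's naming, which varies by addon version:

    # Confirm the node IP the test queried, then retry the lookup against it.
    out/minikube-linux-arm64 -p addons-966885 ip
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-966885 ip)"
    # Check whether the ingress-dns pod is actually running.
    kubectl --context addons-966885 get pods -A | grep -i ingress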
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-966885 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p addons-966885 addons disable ingress-dns --alsologtostderr -v=1: (1.254122211s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-966885 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-966885 addons disable ingress --alsologtostderr -v=1: (7.770658518s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-966885
helpers_test.go:235: (dbg) docker inspect addons-966885:
-- stdout --
	[
	    {
	        "Id": "89bf4ecdccc27b30aa16bd71ea382c37b63a33bbc07ffe7564881be7c6b9da7b",
	        "Created": "2023-07-17T21:03:50.75383371Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1136849,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T21:03:51.103956131Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/89bf4ecdccc27b30aa16bd71ea382c37b63a33bbc07ffe7564881be7c6b9da7b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/89bf4ecdccc27b30aa16bd71ea382c37b63a33bbc07ffe7564881be7c6b9da7b/hostname",
	        "HostsPath": "/var/lib/docker/containers/89bf4ecdccc27b30aa16bd71ea382c37b63a33bbc07ffe7564881be7c6b9da7b/hosts",
	        "LogPath": "/var/lib/docker/containers/89bf4ecdccc27b30aa16bd71ea382c37b63a33bbc07ffe7564881be7c6b9da7b/89bf4ecdccc27b30aa16bd71ea382c37b63a33bbc07ffe7564881be7c6b9da7b-json.log",
	        "Name": "/addons-966885",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-966885:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-966885",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3d50cfa1dc34a2ae5138538eefb3a87da1f36f6e09849205a7d94c6b0c40c61e-init/diff:/var/lib/docker/overlay2/9dd04002488337def4cdbea3f3d72ef7a2164867b83574414c8b40a7e2f88109/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3d50cfa1dc34a2ae5138538eefb3a87da1f36f6e09849205a7d94c6b0c40c61e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3d50cfa1dc34a2ae5138538eefb3a87da1f36f6e09849205a7d94c6b0c40c61e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3d50cfa1dc34a2ae5138538eefb3a87da1f36f6e09849205a7d94c6b0c40c61e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-966885",
	                "Source": "/var/lib/docker/volumes/addons-966885/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-966885",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-966885",
	                "name.minikube.sigs.k8s.io": "addons-966885",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ee2d91277a3a19075f3c2719e615024a1b22701f5b9e2f9150c938b458a9468",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34026"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34025"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34022"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34024"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34023"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3ee2d91277a3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-966885": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "89bf4ecdccc2",
	                        "addons-966885"
	                    ],
	                    "NetworkID": "eaa4a1f7fdf39bf58877a41da250e2dfc39888a9b09716556744c2b3311a51ae",
	                    "EndpointID": "64efdbd7d8a27cb0c280c146b02462f487451bb392069cb7262341d199d21563",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
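
Single fields are easier to pull from this JSON with docker inspect's Go-template --format flag than by reading the full dump. The template paths below mirror keys visible above; the last query is the same one the harness uses later to find the mapped SSH port:

    docker inspect addons-966885 --format '{{.State.Status}}'
    docker inspect addons-966885 --format '{{(index .NetworkSettings.Networks "addons-966885").IPAddress}}'
    docker inspect addons-966885 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'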
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-966885 -n addons-966885
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-966885 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-966885 logs -n 25: (1.579330467s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-025848   | jenkins | v1.30.1 | 17 Jul 23 21:02 UTC |                     |
	|         | -p download-only-025848        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-025848   | jenkins | v1.30.1 | 17 Jul 23 21:03 UTC |                     |
	|         | -p download-only-025848        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.30.1 | 17 Jul 23 21:03 UTC | 17 Jul 23 21:03 UTC |
	| delete  | -p download-only-025848        | download-only-025848   | jenkins | v1.30.1 | 17 Jul 23 21:03 UTC | 17 Jul 23 21:03 UTC |
	| delete  | -p download-only-025848        | download-only-025848   | jenkins | v1.30.1 | 17 Jul 23 21:03 UTC | 17 Jul 23 21:03 UTC |
	| start   | --download-only -p             | download-docker-401645 | jenkins | v1.30.1 | 17 Jul 23 21:03 UTC |                     |
	|         | download-docker-401645         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-401645      | download-docker-401645 | jenkins | v1.30.1 | 17 Jul 23 21:03 UTC | 17 Jul 23 21:03 UTC |
	| start   | --download-only -p             | binary-mirror-287357   | jenkins | v1.30.1 | 17 Jul 23 21:03 UTC |                     |
	|         | binary-mirror-287357           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46049         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-287357        | binary-mirror-287357   | jenkins | v1.30.1 | 17 Jul 23 21:03 UTC | 17 Jul 23 21:03 UTC |
	| start   | -p addons-966885               | addons-966885          | jenkins | v1.30.1 | 17 Jul 23 21:03 UTC | 17 Jul 23 21:06 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-966885          | jenkins | v1.30.1 | 17 Jul 23 21:06 UTC | 17 Jul 23 21:06 UTC |
	|         | addons-966885                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-966885          | jenkins | v1.30.1 | 17 Jul 23 21:06 UTC | 17 Jul 23 21:06 UTC |
	|         | -p addons-966885               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-966885 ip               | addons-966885          | jenkins | v1.30.1 | 17 Jul 23 21:06 UTC | 17 Jul 23 21:06 UTC |
	| addons  | addons-966885 addons disable   | addons-966885          | jenkins | v1.30.1 | 17 Jul 23 21:06 UTC | 17 Jul 23 21:06 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-966885          | jenkins | v1.30.1 | 17 Jul 23 21:06 UTC | 17 Jul 23 21:06 UTC |
	|         | addons-966885                  |                        |         |         |                     |                     |
	| addons  | addons-966885 addons           | addons-966885          | jenkins | v1.30.1 | 17 Jul 23 21:06 UTC | 17 Jul 23 21:06 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ssh     | addons-966885 ssh curl -s      | addons-966885          | jenkins | v1.30.1 | 17 Jul 23 21:06 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | addons-966885 addons           | addons-966885          | jenkins | v1.30.1 | 17 Jul 23 21:07 UTC | 17 Jul 23 21:07 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-966885 addons           | addons-966885          | jenkins | v1.30.1 | 17 Jul 23 21:07 UTC | 17 Jul 23 21:07 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-966885 ip               | addons-966885          | jenkins | v1.30.1 | 17 Jul 23 21:09 UTC | 17 Jul 23 21:09 UTC |
	| addons  | addons-966885 addons disable   | addons-966885          | jenkins | v1.30.1 | 17 Jul 23 21:09 UTC | 17 Jul 23 21:09 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-966885 addons disable   | addons-966885          | jenkins | v1.30.1 | 17 Jul 23 21:09 UTC | 17 Jul 23 21:09 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
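
For reference, the wrapped start row that created this profile corresponds to the single command line below; the flags are copied verbatim from the audit table, and the binary path is assumed from the rest of this report:

    out/minikube-linux-arm64 start -p addons-966885 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns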
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:03:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:03:27.652204 1136376 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:03:27.652392 1136376 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:03:27.652402 1136376 out.go:309] Setting ErrFile to fd 2...
	I0717 21:03:27.652408 1136376 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:03:27.652680 1136376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
	I0717 21:03:27.653108 1136376 out.go:303] Setting JSON to false
	I0717 21:03:27.654169 1136376 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20751,"bootTime":1689607057,"procs":367,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0717 21:03:27.654246 1136376 start.go:138] virtualization:  
	I0717 21:03:27.656571 1136376 out.go:177] * [addons-966885] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 21:03:27.658814 1136376 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 21:03:27.660551 1136376 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:03:27.658981 1136376 notify.go:220] Checking for updates...
	I0717 21:03:27.664454 1136376 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:03:27.666556 1136376 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	I0717 21:03:27.668328 1136376 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 21:03:27.670635 1136376 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:03:27.672850 1136376 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:03:27.696541 1136376 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:03:27.696647 1136376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:03:27.785474 1136376 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-07-17 21:03:27.77545634 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:03:27.785577 1136376 docker.go:294] overlay module found
	I0717 21:03:27.788957 1136376 out.go:177] * Using the docker driver based on user configuration
	I0717 21:03:27.791045 1136376 start.go:298] selected driver: docker
	I0717 21:03:27.791072 1136376 start.go:880] validating driver "docker" against <nil>
	I0717 21:03:27.791085 1136376 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:03:27.791715 1136376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:03:27.859341 1136376 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-07-17 21:03:27.849867543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:03:27.859513 1136376 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 21:03:27.859772 1136376 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 21:03:27.862117 1136376 out.go:177] * Using Docker driver with root privileges
	I0717 21:03:27.863944 1136376 cni.go:84] Creating CNI manager for ""
	I0717 21:03:27.863965 1136376 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:03:27.863974 1136376 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 21:03:27.863985 1136376 start_flags.go:319] config:
	{Name:addons-966885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-966885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:03:27.866499 1136376 out.go:177] * Starting control plane node addons-966885 in cluster addons-966885
	I0717 21:03:27.868375 1136376 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 21:03:27.869969 1136376 out.go:177] * Pulling base image ...
	I0717 21:03:27.872034 1136376 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:03:27.872089 1136376 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4
	I0717 21:03:27.872103 1136376 cache.go:57] Caching tarball of preloaded images
	I0717 21:03:27.872117 1136376 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 21:03:27.872185 1136376 preload.go:174] Found /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0717 21:03:27.872195 1136376 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 21:03:27.872536 1136376 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/config.json ...
	I0717 21:03:27.872566 1136376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/config.json: {Name:mk3ed83b44dd6c257fa54149608f002ba9211ee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:03:27.888617 1136376 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 21:03:27.888726 1136376 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 21:03:27.888744 1136376 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0717 21:03:27.888749 1136376 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0717 21:03:27.888756 1136376 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 21:03:27.888761 1136376 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from local cache
	I0717 21:03:44.040209 1136376 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from cached tarball
	I0717 21:03:44.040249 1136376 cache.go:195] Successfully downloaded all kic artifacts
	I0717 21:03:44.040302 1136376 start.go:365] acquiring machines lock for addons-966885: {Name:mk3fc91af8ba36b4143794ebd08b7e6391a0466d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:03:44.041122 1136376 start.go:369] acquired machines lock for "addons-966885" in 793.78µs
	I0717 21:03:44.041191 1136376 start.go:93] Provisioning new machine with config: &{Name:addons-966885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-966885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 21:03:44.041293 1136376 start.go:125] createHost starting for "" (driver="docker")
	I0717 21:03:44.043620 1136376 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0717 21:03:44.043877 1136376 start.go:159] libmachine.API.Create for "addons-966885" (driver="docker")
	I0717 21:03:44.043923 1136376 client.go:168] LocalClient.Create starting
	I0717 21:03:44.044048 1136376 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem
	I0717 21:03:44.437598 1136376 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem
	I0717 21:03:44.604938 1136376 cli_runner.go:164] Run: docker network inspect addons-966885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 21:03:44.626880 1136376 cli_runner.go:211] docker network inspect addons-966885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 21:03:44.626958 1136376 network_create.go:281] running [docker network inspect addons-966885] to gather additional debugging logs...
	I0717 21:03:44.626974 1136376 cli_runner.go:164] Run: docker network inspect addons-966885
	W0717 21:03:44.643944 1136376 cli_runner.go:211] docker network inspect addons-966885 returned with exit code 1
	I0717 21:03:44.643978 1136376 network_create.go:284] error running [docker network inspect addons-966885]: docker network inspect addons-966885: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-966885 not found
	I0717 21:03:44.643991 1136376 network_create.go:286] output of [docker network inspect addons-966885]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-966885 not found
	
	** /stderr **
	I0717 21:03:44.644046 1136376 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 21:03:44.662907 1136376 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001257eb0}
	I0717 21:03:44.662948 1136376 network_create.go:123] attempt to create docker network addons-966885 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 21:03:44.663007 1136376 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-966885 addons-966885
	I0717 21:03:44.734237 1136376 network_create.go:107] docker network addons-966885 192.168.49.0/24 created
	I0717 21:03:44.734266 1136376 kic.go:117] calculated static IP "192.168.49.2" for the "addons-966885" container
	I0717 21:03:44.734343 1136376 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 21:03:44.752870 1136376 cli_runner.go:164] Run: docker volume create addons-966885 --label name.minikube.sigs.k8s.io=addons-966885 --label created_by.minikube.sigs.k8s.io=true
	I0717 21:03:44.772170 1136376 oci.go:103] Successfully created a docker volume addons-966885
	I0717 21:03:44.772254 1136376 cli_runner.go:164] Run: docker run --rm --name addons-966885-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-966885 --entrypoint /usr/bin/test -v addons-966885:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 21:03:46.586843 1136376 cli_runner.go:217] Completed: docker run --rm --name addons-966885-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-966885 --entrypoint /usr/bin/test -v addons-966885:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.814550551s)
	I0717 21:03:46.586875 1136376 oci.go:107] Successfully prepared a docker volume addons-966885
	I0717 21:03:46.586901 1136376 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:03:46.586921 1136376 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 21:03:46.587044 1136376 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-966885:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 21:03:50.671370 1136376 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-966885:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.084268464s)
	I0717 21:03:50.671404 1136376 kic.go:199] duration metric: took 4.084479 seconds to extract preloaded images to volume
	W0717 21:03:50.671545 1136376 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 21:03:50.671667 1136376 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 21:03:50.738274 1136376 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-966885 --name addons-966885 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-966885 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-966885 --network addons-966885 --ip 192.168.49.2 --volume addons-966885:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 21:03:51.114547 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Running}}
	I0717 21:03:51.137726 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:03:51.162096 1136376 cli_runner.go:164] Run: docker exec addons-966885 stat /var/lib/dpkg/alternatives/iptables
	I0717 21:03:51.239823 1136376 oci.go:144] the created container "addons-966885" has a running status.
	I0717 21:03:51.239867 1136376 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa...
	I0717 21:03:51.596644 1136376 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 21:03:51.631734 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:03:51.658437 1136376 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 21:03:51.658455 1136376 kic_runner.go:114] Args: [docker exec --privileged addons-966885 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 21:03:51.739198 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:03:51.773748 1136376 machine.go:88] provisioning docker machine ...
	I0717 21:03:51.773776 1136376 ubuntu.go:169] provisioning hostname "addons-966885"
	I0717 21:03:51.773856 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:03:51.805899 1136376 main.go:141] libmachine: Using SSH client type: native
	I0717 21:03:51.806354 1136376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34026 <nil> <nil>}
	I0717 21:03:51.806366 1136376 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-966885 && echo "addons-966885" | sudo tee /etc/hostname
	I0717 21:03:51.807002 1136376 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38886->127.0.0.1:34026: read: connection reset by peer
	I0717 21:03:54.951706 1136376 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-966885
	
	I0717 21:03:54.951820 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:03:54.970686 1136376 main.go:141] libmachine: Using SSH client type: native
	I0717 21:03:54.971137 1136376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34026 <nil> <nil>}
	I0717 21:03:54.971160 1136376 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-966885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-966885/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-966885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 21:03:55.123171 1136376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 21:03:55.123199 1136376 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1130480/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1130480/.minikube}
	I0717 21:03:55.123223 1136376 ubuntu.go:177] setting up certificates
	I0717 21:03:55.123232 1136376 provision.go:83] configureAuth start
	I0717 21:03:55.123307 1136376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-966885
	I0717 21:03:55.144653 1136376 provision.go:138] copyHostCerts
	I0717 21:03:55.144738 1136376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem (1675 bytes)
	I0717 21:03:55.144880 1136376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem (1082 bytes)
	I0717 21:03:55.144948 1136376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem (1123 bytes)
	I0717 21:03:55.145001 1136376 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem org=jenkins.addons-966885 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-966885]
	I0717 21:03:55.618300 1136376 provision.go:172] copyRemoteCerts
	I0717 21:03:55.618391 1136376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 21:03:55.618438 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:03:55.636646 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	I0717 21:03:55.732416 1136376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 21:03:55.762364 1136376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 21:03:55.793339 1136376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 21:03:55.823912 1136376 provision.go:86] duration metric: configureAuth took 700.66728ms
	I0717 21:03:55.823940 1136376 ubuntu.go:193] setting minikube options for container-runtime
	I0717 21:03:55.824131 1136376 config.go:182] Loaded profile config "addons-966885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:03:55.824255 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:03:55.842173 1136376 main.go:141] libmachine: Using SSH client type: native
	I0717 21:03:55.842611 1136376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34026 <nil> <nil>}
	I0717 21:03:55.842633 1136376 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 21:03:56.093373 1136376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 21:03:56.093399 1136376 machine.go:91] provisioned docker machine in 4.319634191s
	I0717 21:03:56.093408 1136376 client.go:171] LocalClient.Create took 12.049477388s
	I0717 21:03:56.093420 1136376 start.go:167] duration metric: libmachine.API.Create for "addons-966885" took 12.049544186s
	I0717 21:03:56.093428 1136376 start.go:300] post-start starting for "addons-966885" (driver="docker")
	I0717 21:03:56.093440 1136376 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 21:03:56.093515 1136376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 21:03:56.093563 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:03:56.112152 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	I0717 21:03:56.208208 1136376 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 21:03:56.212449 1136376 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 21:03:56.212486 1136376 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 21:03:56.212498 1136376 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 21:03:56.212510 1136376 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 21:03:56.212519 1136376 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/addons for local assets ...
	I0717 21:03:56.212588 1136376 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/files for local assets ...
	I0717 21:03:56.212615 1136376 start.go:303] post-start completed in 119.181263ms
	I0717 21:03:56.212933 1136376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-966885
	I0717 21:03:56.230438 1136376 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/config.json ...
	I0717 21:03:56.230709 1136376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:03:56.230758 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:03:56.247860 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	I0717 21:03:56.339623 1136376 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 21:03:56.346394 1136376 start.go:128] duration metric: createHost completed in 12.305085428s
	I0717 21:03:56.346426 1136376 start.go:83] releasing machines lock for "addons-966885", held for 12.305288596s
	I0717 21:03:56.346506 1136376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-966885
	I0717 21:03:56.366111 1136376 ssh_runner.go:195] Run: cat /version.json
	I0717 21:03:56.366139 1136376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 21:03:56.366161 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:03:56.366209 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:03:56.388021 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	I0717 21:03:56.397120 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	W0717 21:03:56.477649 1136376 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 21:03:56.477739 1136376 ssh_runner.go:195] Run: systemctl --version
	I0717 21:03:56.617727 1136376 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 21:03:56.767979 1136376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 21:03:56.773685 1136376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:03:56.799634 1136376 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 21:03:56.799721 1136376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:03:56.838743 1136376 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
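The find/-exec pattern above disables conflicting CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so they can be restored later. Going by the "disabled" line, on this host it was equivalent to running the two commands below (the loopback config matched by the earlier glob was moved the same way, but its exact filename is not shown in the log):

	sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
	sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.mk_disabled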
	I0717 21:03:56.838781 1136376 start.go:469] detecting cgroup driver to use...
	I0717 21:03:56.838832 1136376 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 21:03:56.838916 1136376 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 21:03:56.857089 1136376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 21:03:56.870532 1136376 docker.go:196] disabling cri-docker service (if available) ...
	I0717 21:03:56.870597 1136376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 21:03:56.885944 1136376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 21:03:56.903990 1136376 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 21:03:57.005991 1136376 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 21:03:57.117829 1136376 docker.go:212] disabling docker service ...
	I0717 21:03:57.117958 1136376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 21:03:57.140974 1136376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 21:03:57.154813 1136376 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 21:03:57.251986 1136376 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 21:03:57.357808 1136376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 21:03:57.371512 1136376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 21:03:57.391604 1136376 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 21:03:57.391673 1136376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:03:57.403229 1136376 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 21:03:57.403305 1136376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:03:57.415124 1136376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:03:57.426801 1136376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
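Taken together, the sed edits above pin the pause image, force the cgroupfs cgroup manager, and re-add conmon_cgroup beneath it. A sketch of the resulting drop-in, assuming the stock section layout of 02-crio.conf in the kicbase image (the [crio.image]/[crio.runtime] headers are assumptions; only the three key/value lines are guaranteed by the sed scripts):

	$ sudo cat /etc/crio/crio.conf.d/02-crio.conf
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"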
	I0717 21:03:57.438679 1136376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 21:03:57.449332 1136376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 21:03:57.459176 1136376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 21:03:57.469291 1136376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 21:03:57.556031 1136376 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 21:03:57.664521 1136376 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 21:03:57.664617 1136376 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 21:03:57.669318 1136376 start.go:537] Will wait 60s for crictl version
	I0717 21:03:57.669410 1136376 ssh_runner.go:195] Run: which crictl
	I0717 21:03:57.673735 1136376 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 21:03:57.722597 1136376 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 21:03:57.722717 1136376 ssh_runner.go:195] Run: crio --version
	I0717 21:03:57.767637 1136376 ssh_runner.go:195] Run: crio --version
	I0717 21:03:57.815525 1136376 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0717 21:03:57.817273 1136376 cli_runner.go:164] Run: docker network inspect addons-966885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 21:03:57.833656 1136376 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 21:03:57.838059 1136376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
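The bash one-liner above is an idempotent hosts-file update: grep -v strips any stale host.minikube.internal entry, the fresh mapping is appended, and the temp file is copied back over /etc/hosts with sudo. Afterwards the guest resolves the host gateway like so:

	$ grep host.minikube.internal /etc/hosts
	192.168.49.1	host.minikube.internal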
	I0717 21:03:57.850908 1136376 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:03:57.850973 1136376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:03:57.913646 1136376 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 21:03:57.913668 1136376 crio.go:415] Images already preloaded, skipping extraction
	I0717 21:03:57.913724 1136376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:03:57.954852 1136376 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 21:03:57.954870 1136376 cache_images.go:84] Images are preloaded, skipping loading
	I0717 21:03:57.954952 1136376 ssh_runner.go:195] Run: crio config
	I0717 21:03:58.012633 1136376 cni.go:84] Creating CNI manager for ""
	I0717 21:03:58.012656 1136376 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:03:58.012668 1136376 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 21:03:58.012686 1136376 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-966885 NodeName:addons-966885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 21:03:58.012821 1136376 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-966885"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 21:03:58.012893 1136376 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-966885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-966885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 21:03:58.012958 1136376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 21:03:58.025549 1136376 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 21:03:58.025661 1136376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 21:03:58.037627 1136376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0717 21:03:58.060548 1136376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 21:03:58.083564 1136376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0717 21:03:58.105078 1136376 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 21:03:58.109761 1136376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 21:03:58.123674 1136376 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885 for IP: 192.168.49.2
	I0717 21:03:58.123705 1136376 certs.go:190] acquiring lock for shared ca certs: {Name:mk8e5c72a7d7e3f9ffe23960b258dcb0da4448fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:03:58.124285 1136376 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.key
	I0717 21:03:58.493022 1136376 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt ...
	I0717 21:03:58.493050 1136376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt: {Name:mk14e8c871772266a5e03872b6dba1aacb5c523a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:03:58.494021 1136376 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.key ...
	I0717 21:03:58.494043 1136376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.key: {Name:mkfbeaaaff8de89df92f9e3aa5d4a074a710364f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:03:58.494519 1136376 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.key
	I0717 21:03:58.753843 1136376 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.crt ...
	I0717 21:03:58.753878 1136376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.crt: {Name:mk8a7266eee4160e4ea95e435c7d348f7b10ef85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:03:58.754116 1136376 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.key ...
	I0717 21:03:58.754132 1136376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.key: {Name:mkaa3f1c59a4e552a453bcfeffd22870fdf9c171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:03:58.754255 1136376 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.key
	I0717 21:03:58.754272 1136376 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt with IP's: []
	I0717 21:03:59.235117 1136376 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt ...
	I0717 21:03:59.235147 1136376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: {Name:mk71aa5f1002341b9c47d44958f87d0a99f9187d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:03:59.235782 1136376 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.key ...
	I0717 21:03:59.235798 1136376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.key: {Name:mk4253ce5579112d9f9edc94397b55962c43d3d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:03:59.236213 1136376 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/apiserver.key.dd3b5fb2
	I0717 21:03:59.236236 1136376 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 21:03:59.564199 1136376 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/apiserver.crt.dd3b5fb2 ...
	I0717 21:03:59.564235 1136376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/apiserver.crt.dd3b5fb2: {Name:mk8357ae918a4a0940ae167261c5088102928678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:03:59.564982 1136376 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/apiserver.key.dd3b5fb2 ...
	I0717 21:03:59.565002 1136376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/apiserver.key.dd3b5fb2: {Name:mkc6b34921ee6b019192500fc01b2fdfbf8001ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:03:59.565098 1136376 certs.go:337] copying /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/apiserver.crt
	I0717 21:03:59.565193 1136376 certs.go:341] copying /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/apiserver.key
	I0717 21:03:59.565246 1136376 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/proxy-client.key
	I0717 21:03:59.565266 1136376 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/proxy-client.crt with IP's: []
	I0717 21:03:59.781375 1136376 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/proxy-client.crt ...
	I0717 21:03:59.781405 1136376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/proxy-client.crt: {Name:mk9dbaba0e341bb7ef020c77cbbeb24d2e754c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:03:59.781590 1136376 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/proxy-client.key ...
	I0717 21:03:59.781603 1136376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/proxy-client.key: {Name:mk592ae188e10522e82ffb6ae4117daeb698a918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:03:59.781792 1136376 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 21:03:59.781835 1136376 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem (1082 bytes)
	I0717 21:03:59.781866 1136376 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem (1123 bytes)
	I0717 21:03:59.781899 1136376 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem (1675 bytes)
	I0717 21:03:59.782518 1136376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 21:03:59.812356 1136376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 21:03:59.841373 1136376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 21:03:59.869870 1136376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 21:03:59.898133 1136376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 21:03:59.926379 1136376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 21:03:59.953925 1136376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 21:03:59.981634 1136376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 21:04:00.032973 1136376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 21:04:00.095999 1136376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 21:04:00.151178 1136376 ssh_runner.go:195] Run: openssl version
	I0717 21:04:00.177974 1136376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 21:04:00.199414 1136376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:04:00.210222 1136376 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:03 /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:04:00.210306 1136376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:04:00.222511 1136376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
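The b5213941.0 name is not arbitrary: OpenSSL locates CA certificates in /etc/ssl/certs by subject-name hash, which is exactly what the `openssl x509 -hash` run above computes. The two steps fit together like this (the hash value is inferred from the symlink name in the log):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"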
	I0717 21:04:00.240755 1136376 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 21:04:00.249412 1136376 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 21:04:00.249465 1136376 kubeadm.go:404] StartCluster: {Name:addons-966885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-966885 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:04:00.249553 1136376 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 21:04:00.249617 1136376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 21:04:00.314258 1136376 cri.go:89] found id: ""
	I0717 21:04:00.314340 1136376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 21:04:00.329009 1136376 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 21:04:00.344037 1136376 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 21:04:00.344152 1136376 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 21:04:00.358757 1136376 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 21:04:00.358824 1136376 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 21:04:00.422666 1136376 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 21:04:00.423089 1136376 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 21:04:00.479071 1136376 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 21:04:00.479197 1136376 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-aws
	I0717 21:04:00.479246 1136376 kubeadm.go:322] OS: Linux
	I0717 21:04:00.479302 1136376 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 21:04:00.479362 1136376 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 21:04:00.479421 1136376 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 21:04:00.479473 1136376 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 21:04:00.479527 1136376 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 21:04:00.479580 1136376 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 21:04:00.479630 1136376 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0717 21:04:00.479685 1136376 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0717 21:04:00.479737 1136376 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0717 21:04:00.566766 1136376 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 21:04:00.566880 1136376 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 21:04:00.566985 1136376 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 21:04:00.861529 1136376 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 21:04:00.864215 1136376 out.go:204]   - Generating certificates and keys ...
	I0717 21:04:00.864307 1136376 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 21:04:00.864380 1136376 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 21:04:01.106727 1136376 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 21:04:01.269110 1136376 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 21:04:01.475364 1136376 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 21:04:01.786917 1136376 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 21:04:02.082999 1136376 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 21:04:02.083159 1136376 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-966885 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 21:04:02.849943 1136376 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 21:04:02.850327 1136376 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-966885 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 21:04:03.236276 1136376 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 21:04:04.033826 1136376 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 21:04:04.569209 1136376 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 21:04:04.569528 1136376 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 21:04:04.759735 1136376 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 21:04:05.155797 1136376 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 21:04:06.225487 1136376 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 21:04:06.752196 1136376 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 21:04:06.764040 1136376 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 21:04:06.767051 1136376 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 21:04:06.767113 1136376 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 21:04:06.873646 1136376 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 21:04:06.877432 1136376 out.go:204]   - Booting up control plane ...
	I0717 21:04:06.877569 1136376 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 21:04:06.877644 1136376 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 21:04:06.879067 1136376 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 21:04:06.880263 1136376 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 21:04:06.884032 1136376 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 21:04:14.886658 1136376 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002063 seconds
	I0717 21:04:14.886773 1136376 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 21:04:14.903989 1136376 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 21:04:15.429447 1136376 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 21:04:15.429639 1136376 kubeadm.go:322] [mark-control-plane] Marking the node addons-966885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 21:04:15.941458 1136376 kubeadm.go:322] [bootstrap-token] Using token: er5nd2.x7lutdzjzznshv69
	I0717 21:04:15.943165 1136376 out.go:204]   - Configuring RBAC rules ...
	I0717 21:04:15.943289 1136376 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 21:04:15.949317 1136376 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 21:04:15.957360 1136376 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 21:04:15.960821 1136376 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 21:04:15.965017 1136376 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 21:04:15.971035 1136376 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 21:04:15.984222 1136376 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 21:04:16.235345 1136376 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 21:04:16.378534 1136376 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 21:04:16.379728 1136376 kubeadm.go:322] 
	I0717 21:04:16.379796 1136376 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 21:04:16.379807 1136376 kubeadm.go:322] 
	I0717 21:04:16.379880 1136376 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 21:04:16.379888 1136376 kubeadm.go:322] 
	I0717 21:04:16.379912 1136376 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 21:04:16.379976 1136376 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 21:04:16.380028 1136376 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 21:04:16.380036 1136376 kubeadm.go:322] 
	I0717 21:04:16.380087 1136376 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 21:04:16.380096 1136376 kubeadm.go:322] 
	I0717 21:04:16.380141 1136376 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 21:04:16.380149 1136376 kubeadm.go:322] 
	I0717 21:04:16.380198 1136376 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 21:04:16.380272 1136376 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 21:04:16.380340 1136376 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 21:04:16.380350 1136376 kubeadm.go:322] 
	I0717 21:04:16.380430 1136376 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 21:04:16.380506 1136376 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 21:04:16.380515 1136376 kubeadm.go:322] 
	I0717 21:04:16.380593 1136376 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token er5nd2.x7lutdzjzznshv69 \
	I0717 21:04:16.380695 1136376 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:114c2c6cf073ae167542850daf65adc7c2fffca2d9da9ec1b9de2454bc4224aa \
	I0717 21:04:16.380718 1136376 kubeadm.go:322] 	--control-plane 
	I0717 21:04:16.380726 1136376 kubeadm.go:322] 
	I0717 21:04:16.380812 1136376 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 21:04:16.380859 1136376 kubeadm.go:322] 
	I0717 21:04:16.380936 1136376 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token er5nd2.x7lutdzjzznshv69 \
	I0717 21:04:16.381036 1136376 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:114c2c6cf073ae167542850daf65adc7c2fffca2d9da9ec1b9de2454bc4224aa 
	I0717 21:04:16.384454 1136376 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-aws\n", err: exit status 1
	I0717 21:04:16.384573 1136376 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 21:04:16.384592 1136376 cni.go:84] Creating CNI manager for ""
	I0717 21:04:16.384604 1136376 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:04:16.386616 1136376 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 21:04:16.388309 1136376 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 21:04:16.410281 1136376 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 21:04:16.410299 1136376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 21:04:16.470138 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 21:04:17.348687 1136376 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 21:04:17.348822 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:17.348900 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=addons-966885 minikube.k8s.io/updated_at=2023_07_17T21_04_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:17.526083 1136376 ops.go:34] apiserver oom_adj: -16
	I0717 21:04:17.526170 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:18.145683 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:18.645297 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:19.146233 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:19.646280 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:20.145316 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:20.645308 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:21.145326 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:21.645417 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:22.146180 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:22.646191 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:23.145302 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:23.645987 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:24.145803 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:24.645390 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:25.145855 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:25.645664 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:26.146181 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:26.645525 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:27.146006 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:27.645344 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:28.145363 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:28.646359 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:29.146004 1136376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:04:29.266937 1136376 kubeadm.go:1081] duration metric: took 11.918165689s to wait for elevateKubeSystemPrivileges.
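The half-second-spaced `kubectl get sa default` runs from 21:04:17 to 21:04:29 are a poll loop: minikube retries until the `default` ServiceAccount exists before binding cluster-admin to kube-system (the elevateKubeSystemPrivileges step timed above). A minimal shell equivalent of that wait, not minikube's actual Go implementation:

	until sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms spacing of the log lines above
	done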
	I0717 21:04:29.266963 1136376 kubeadm.go:406] StartCluster complete in 29.017501852s
	I0717 21:04:29.266980 1136376 settings.go:142] acquiring lock: {Name:mkf49a04ad0833d4cf5e309fbf4dcc2866032ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:04:29.267110 1136376 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:04:29.267489 1136376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/kubeconfig: {Name:mkeb40f750a7362e9193faee51ea6ae2e33e893d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:04:29.268891 1136376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 21:04:29.269199 1136376 config.go:182] Loaded profile config "addons-966885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:04:29.269339 1136376 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0717 21:04:29.269430 1136376 addons.go:69] Setting volumesnapshots=true in profile "addons-966885"
	I0717 21:04:29.269447 1136376 addons.go:231] Setting addon volumesnapshots=true in "addons-966885"
	I0717 21:04:29.269501 1136376 host.go:66] Checking if "addons-966885" exists ...
	I0717 21:04:29.270005 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:04:29.271438 1136376 addons.go:69] Setting ingress=true in profile "addons-966885"
	I0717 21:04:29.271460 1136376 addons.go:231] Setting addon ingress=true in "addons-966885"
	I0717 21:04:29.271510 1136376 host.go:66] Checking if "addons-966885" exists ...
	I0717 21:04:29.271973 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:04:29.272057 1136376 addons.go:69] Setting cloud-spanner=true in profile "addons-966885"
	I0717 21:04:29.272071 1136376 addons.go:231] Setting addon cloud-spanner=true in "addons-966885"
	I0717 21:04:29.272104 1136376 host.go:66] Checking if "addons-966885" exists ...
	I0717 21:04:29.272493 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:04:29.272571 1136376 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-966885"
	I0717 21:04:29.272600 1136376 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-966885"
	I0717 21:04:29.272638 1136376 host.go:66] Checking if "addons-966885" exists ...
	I0717 21:04:29.273036 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:04:29.273108 1136376 addons.go:69] Setting default-storageclass=true in profile "addons-966885"
	I0717 21:04:29.273123 1136376 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-966885"
	I0717 21:04:29.273396 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:04:29.273461 1136376 addons.go:69] Setting gcp-auth=true in profile "addons-966885"
	I0717 21:04:29.273486 1136376 mustload.go:65] Loading cluster: addons-966885
	I0717 21:04:29.273656 1136376 config.go:182] Loaded profile config "addons-966885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:04:29.273881 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:04:29.273963 1136376 addons.go:69] Setting metrics-server=true in profile "addons-966885"
	I0717 21:04:29.273975 1136376 addons.go:231] Setting addon metrics-server=true in "addons-966885"
	I0717 21:04:29.274000 1136376 host.go:66] Checking if "addons-966885" exists ...
	I0717 21:04:29.274388 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:04:29.274461 1136376 addons.go:69] Setting ingress-dns=true in profile "addons-966885"
	I0717 21:04:29.274474 1136376 addons.go:231] Setting addon ingress-dns=true in "addons-966885"
	I0717 21:04:29.274504 1136376 host.go:66] Checking if "addons-966885" exists ...
	I0717 21:04:29.274883 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:04:29.274960 1136376 addons.go:69] Setting inspektor-gadget=true in profile "addons-966885"
	I0717 21:04:29.274974 1136376 addons.go:231] Setting addon inspektor-gadget=true in "addons-966885"
	I0717 21:04:29.274996 1136376 host.go:66] Checking if "addons-966885" exists ...
	I0717 21:04:29.275366 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:04:29.275440 1136376 addons.go:69] Setting registry=true in profile "addons-966885"
	I0717 21:04:29.275467 1136376 addons.go:231] Setting addon registry=true in "addons-966885"
	I0717 21:04:29.275493 1136376 host.go:66] Checking if "addons-966885" exists ...
	I0717 21:04:29.279848 1136376 addons.go:69] Setting storage-provisioner=true in profile "addons-966885"
	I0717 21:04:29.279877 1136376 addons.go:231] Setting addon storage-provisioner=true in "addons-966885"
	I0717 21:04:29.281544 1136376 host.go:66] Checking if "addons-966885" exists ...
	I0717 21:04:29.282009 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:04:29.296114 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:04:29.337027 1136376 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 21:04:29.338906 1136376 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 21:04:29.338932 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 21:04:29.339002 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:04:29.448050 1136376 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:04:29.450557 1136376 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.7
	I0717 21:04:29.452462 1136376 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0717 21:04:29.452481 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0717 21:04:29.452544 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:04:29.450722 1136376 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:04:29.453219 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 21:04:29.453284 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:04:29.485109 1136376 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0717 21:04:29.489035 1136376 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 21:04:29.490851 1136376 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 21:04:29.493319 1136376 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 21:04:29.493371 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0717 21:04:29.493460 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:04:29.550606 1136376 host.go:66] Checking if "addons-966885" exists ...
	I0717 21:04:29.567837 1136376 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0717 21:04:29.552881 1136376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
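The sed pipeline above patches the live coredns ConfigMap in place: it inserts a hosts block immediately before the `forward . /etc/resolv.conf` directive and a `log` directive before `errors`, then pipes the result back through `kubectl replace -f -`. The touched region of the Corefile would read roughly as below (a sketch; the surrounding directives come from the stock kubeadm Corefile and are not shown in this log):

	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf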
	I0717 21:04:29.569409 1136376 addons.go:231] Setting addon default-storageclass=true in "addons-966885"
	I0717 21:04:29.573695 1136376 out.go:177]   - Using image docker.io/registry:2.8.1
	I0717 21:04:29.572268 1136376 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 21:04:29.572277 1136376 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0717 21:04:29.572281 1136376 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0717 21:04:29.572329 1136376 host.go:66] Checking if "addons-966885" exists ...
	I0717 21:04:29.576516 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:04:29.578404 1136376 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 21:04:29.576929 1136376 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 21:04:29.580419 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 21:04:29.582207 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:04:29.582493 1136376 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 21:04:29.582504 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 21:04:29.582545 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:04:29.582643 1136376 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 21:04:29.582650 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 21:04:29.582678 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:04:29.582752 1136376 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.18.1
	I0717 21:04:29.584661 1136376 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 21:04:29.584676 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 21:04:29.584727 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:04:29.582924 1136376 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 21:04:29.587342 1136376 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 21:04:29.589521 1136376 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 21:04:29.593667 1136376 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 21:04:29.596060 1136376 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 21:04:29.598013 1136376 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 21:04:29.603751 1136376 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 21:04:29.603774 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 21:04:29.603836 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:04:29.669416 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	I0717 21:04:29.683642 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	I0717 21:04:29.684503 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	I0717 21:04:29.740102 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	I0717 21:04:29.789753 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	I0717 21:04:29.808930 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	I0717 21:04:29.815027 1136376 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 21:04:29.815048 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 21:04:29.815131 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:04:29.823308 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	I0717 21:04:29.849915 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	I0717 21:04:29.857383 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	I0717 21:04:29.890145 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
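
Each `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call above resolves the host port Docker mapped to the node container's 22/tcp, which the sshutil clients then dial on 127.0.0.1 (port 34026 in this run). A minimal way to reproduce the lookup by hand, using the container name from this log; the `docker port` form is an equivalent shorthand, not something minikube itself runs here:

	# Go-template lookup, as minikube runs it (minus the extra quoting in the log):
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-966885
	# equivalent shorthand:
	docker port addons-966885 22
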
	I0717 21:04:30.119136 1136376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 21:04:30.136390 1136376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 21:04:30.155312 1136376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:04:30.179618 1136376 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 21:04:30.179694 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 21:04:30.202123 1136376 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 21:04:30.202199 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 21:04:30.215164 1136376 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 21:04:30.215233 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 21:04:30.264451 1136376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 21:04:30.290210 1136376 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 21:04:30.290278 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 21:04:30.293029 1136376 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 21:04:30.293101 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 21:04:30.358100 1136376 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 21:04:30.358159 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 21:04:30.375201 1136376 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 21:04:30.375230 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 21:04:30.388005 1136376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 21:04:30.440371 1136376 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 21:04:30.440391 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 21:04:30.446139 1136376 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 21:04:30.446157 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 21:04:30.468457 1136376 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 21:04:30.468522 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 21:04:30.508012 1136376 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 21:04:30.508076 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 21:04:30.523678 1136376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 21:04:30.572844 1136376 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-966885" context rescaled to 1 replicas
	I0717 21:04:30.572886 1136376 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
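
The "rescaled to 1 replicas" line above is minikube trimming the coredns Deployment down to a single replica, since a one-node cluster gains nothing from two. Done by hand, the same rescale would look like this (a sketch; minikube performs it through the API rather than by shelling out to kubectl):

	kubectl --context addons-966885 -n kube-system scale deployment coredns --replicas=1
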
	I0717 21:04:30.576468 1136376 out.go:177] * Verifying Kubernetes components...
	I0717 21:04:30.578549 1136376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:04:30.624316 1136376 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 21:04:30.624339 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 21:04:30.631664 1136376 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 21:04:30.631690 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 21:04:30.666908 1136376 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 21:04:30.666938 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 21:04:30.699485 1136376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 21:04:30.790290 1136376 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 21:04:30.790314 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 21:04:30.815149 1136376 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 21:04:30.815174 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 21:04:30.828953 1136376 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 21:04:30.828978 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 21:04:30.934156 1136376 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 21:04:30.934180 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 21:04:30.936350 1136376 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 21:04:30.936370 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 21:04:30.947950 1136376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 21:04:31.048278 1136376 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 21:04:31.048345 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 21:04:31.051515 1136376 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 21:04:31.051583 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 21:04:31.147728 1136376 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 21:04:31.147796 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 21:04:31.180363 1136376 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 21:04:31.180433 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0717 21:04:31.183168 1136376 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 21:04:31.183234 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 21:04:31.221272 1136376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 21:04:31.273347 1136376 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 21:04:31.273417 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 21:04:31.378460 1136376 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 21:04:31.378527 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 21:04:31.471610 1136376 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 21:04:31.471678 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 21:04:31.595233 1136376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 21:04:31.923899 1136376 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.353663687s)
	I0717 21:04:31.923975 1136376 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
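
The bash pipeline that just completed splices a hosts block into the coredns ConfigMap so in-cluster workloads can resolve host.minikube.internal to the Docker gateway (192.168.49.1). The injected Corefile fragment, exactly as encoded in the sed expression above, plus a quick check that it landed:

	# fragment inserted ahead of the `forward . /etc/resolv.conf` line:
	#     hosts {
	#        192.168.49.1 host.minikube.internal
	#        fallthrough
	#     }
	kubectl --context addons-966885 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
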
	I0717 21:04:34.048269 1136376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.929044361s)
	I0717 21:04:35.359448 1136376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.204113753s)
	I0717 21:04:35.359613 1136376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.971577953s)
	I0717 21:04:35.359391 1136376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.222921909s)
	I0717 21:04:35.359668 1136376 addons.go:467] Verifying addon ingress=true in "addons-966885"
	I0717 21:04:35.359722 1136376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.836002822s)
	I0717 21:04:35.359752 1136376 addons.go:467] Verifying addon registry=true in "addons-966885"
	I0717 21:04:35.361989 1136376 out.go:177] * Verifying registry addon...
	I0717 21:04:35.359548 1136376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.095037675s)
	I0717 21:04:35.359985 1136376 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.781407211s)
	I0717 21:04:35.360262 1136376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.66074592s)
	I0717 21:04:35.360441 1136376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.13908736s)
	I0717 21:04:35.360384 1136376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.412362529s)
	I0717 21:04:35.363908 1136376 out.go:177] * Verifying ingress addon...
	I0717 21:04:35.364811 1136376 node_ready.go:35] waiting up to 6m0s for node "addons-966885" to be "Ready" ...
	I0717 21:04:35.364940 1136376 addons.go:467] Verifying addon metrics-server=true in "addons-966885"
	W0717 21:04:35.364963 1136376 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 21:04:35.367914 1136376 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 21:04:35.368354 1136376 retry.go:31] will retry after 351.958157ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
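
The failure and retry above are the usual CRD-ordering race: the volumesnapshotclasses CRD is created in the same apply batch as the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml, so no REST mapping for the new kind exists yet when that file is processed. minikube's answer is to retry (and, a few lines below, to re-apply with --force). Scripting the same manifests by hand, waiting for the CRD's Established condition between two separate applies avoids the race entirely (a sketch; the split file names here are hypothetical):

	kubectl apply -f snapshot-crds.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml
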
	I0717 21:04:35.371271 1136376 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 21:04:35.378843 1136376 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 21:04:35.378914 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:35.388623 1136376 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 21:04:35.388692 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:35.623166 1136376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.027830442s)
	I0717 21:04:35.623245 1136376 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-966885"
	I0717 21:04:35.626750 1136376 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 21:04:35.629708 1136376 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 21:04:35.640690 1136376 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 21:04:35.640715 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
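
From here on the log is dominated by kapi.go polling loops that re-check pods by label selector until they leave Pending, interleaved with node_ready.go watching the node's Ready condition. The same waits expressed directly with kubectl (context, namespaces, and labels taken from this log; the timeout is an arbitrary choice, and the ingress selector is narrowed to the controller because the certgen jobs under the broader name label complete rather than ever become Ready):

	kubectl --context addons-966885 -n ingress-nginx wait pod \
	  -l app.kubernetes.io/component=controller --for=condition=Ready --timeout=6m
	kubectl --context addons-966885 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=6m
	kubectl --context addons-966885 wait node/addons-966885 --for=condition=Ready --timeout=6m
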
	I0717 21:04:35.721427 1136376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 21:04:35.932772 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:35.935008 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:36.162018 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:36.431890 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:36.432780 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:36.682295 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:36.925622 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:37.050044 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:37.053218 1136376 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 21:04:37.053375 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:04:37.082903 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	I0717 21:04:37.167303 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:37.289072 1136376 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
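
The two scp lines above seed /var/lib/minikube/google_application_credentials.json and /var/lib/minikube/google_cloud_project on the node; the gcp-auth webhook configured next injects that material into workloads. A quick spot-check of the seeded files (the `minikube ssh` form is an assumed convenience for inspection, not something this log runs):

	minikube -p addons-966885 ssh "sudo cat /var/lib/minikube/google_cloud_project"
	minikube -p addons-966885 ssh "sudo head -c 64 /var/lib/minikube/google_application_credentials.json"
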
	I0717 21:04:37.376026 1136376 addons.go:231] Setting addon gcp-auth=true in "addons-966885"
	I0717 21:04:37.376087 1136376 host.go:66] Checking if "addons-966885" exists ...
	I0717 21:04:37.376527 1136376 cli_runner.go:164] Run: docker container inspect addons-966885 --format={{.State.Status}}
	I0717 21:04:37.413241 1136376 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 21:04:37.413296 1136376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-966885
	I0717 21:04:37.430384 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:37.434061 1136376 node_ready.go:58] node "addons-966885" has status "Ready":"False"
	I0717 21:04:37.434644 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:37.467704 1136376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34026 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/addons-966885/id_rsa Username:docker}
	I0717 21:04:37.646152 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:37.912982 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:37.913918 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:37.991578 1136376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.270104039s)
	I0717 21:04:37.993488 1136376 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0717 21:04:37.995631 1136376 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 21:04:37.998325 1136376 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 21:04:37.998354 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 21:04:38.056591 1136376 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 21:04:38.056623 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 21:04:38.115225 1136376 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 21:04:38.115253 1136376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0717 21:04:38.145481 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:38.179988 1136376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 21:04:38.419332 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:38.439005 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:38.663582 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:38.901118 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:38.914153 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:39.147864 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:39.424781 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:39.426102 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:39.672226 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:39.887140 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:39.894757 1136376 node_ready.go:58] node "addons-966885" has status "Ready":"False"
	I0717 21:04:39.897943 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:40.146766 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:40.424283 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:40.425455 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:40.687030 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:40.739042 1136376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.558974757s)
	I0717 21:04:40.741029 1136376 addons.go:467] Verifying addon gcp-auth=true in "addons-966885"
	I0717 21:04:40.743450 1136376 out.go:177] * Verifying gcp-auth addon...
	I0717 21:04:40.746609 1136376 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 21:04:40.765552 1136376 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 21:04:40.765610 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:40.884311 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:40.894237 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:41.149578 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:41.270263 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:41.384318 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:41.401231 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:41.651962 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:41.770121 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:41.883539 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:41.900231 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:42.148945 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:42.270769 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:42.384044 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:42.396210 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:42.400431 1136376 node_ready.go:58] node "addons-966885" has status "Ready":"False"
	I0717 21:04:42.645713 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:42.770544 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:42.885043 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:42.900541 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:43.146588 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:43.271782 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:43.387262 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:43.395773 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:43.658807 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:43.770761 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:43.884274 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:43.897118 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:44.146009 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:44.270264 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:44.398213 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:44.404038 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:44.646440 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:44.769837 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:44.883740 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:44.889593 1136376 node_ready.go:58] node "addons-966885" has status "Ready":"False"
	I0717 21:04:44.893744 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:45.161525 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:45.270423 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:45.410452 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:45.414128 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:45.646036 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:45.771015 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:45.884462 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:45.895008 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:46.146517 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:46.270085 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:46.383399 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:46.396668 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:46.647048 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:46.770156 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:46.887065 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:46.894234 1136376 node_ready.go:58] node "addons-966885" has status "Ready":"False"
	I0717 21:04:46.897475 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:47.146712 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:47.271893 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:47.384829 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:47.395753 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:47.645765 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:47.770476 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:47.884136 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:47.893882 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:48.146530 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:48.269144 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:48.383679 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:48.392725 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:48.645462 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:48.769054 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:48.883956 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:48.892860 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:49.147200 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:49.269226 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:49.383488 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:49.388895 1136376 node_ready.go:58] node "addons-966885" has status "Ready":"False"
	I0717 21:04:49.392886 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:49.645250 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:49.770432 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:49.883015 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:49.892502 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:50.146050 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:50.270594 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:50.383690 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:50.392651 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:50.645459 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:50.769134 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:50.884387 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:50.892964 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:51.145681 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:51.270103 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:51.383307 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:51.389412 1136376 node_ready.go:58] node "addons-966885" has status "Ready":"False"
	I0717 21:04:51.393261 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:51.646195 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:51.770140 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:51.884284 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:51.893243 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:52.146476 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:52.269185 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:52.383655 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:52.393484 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:52.645133 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:52.769707 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:52.883118 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:52.892764 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:53.145816 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:53.269772 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:53.383685 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:53.392832 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:53.645439 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:53.769699 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:53.883507 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:53.889929 1136376 node_ready.go:58] node "addons-966885" has status "Ready":"False"
	I0717 21:04:53.893253 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:54.145971 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:54.269737 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:54.383428 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:54.392771 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:54.645488 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:54.769024 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:54.883227 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:54.893690 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:55.146134 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:55.271081 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:55.383312 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:55.392724 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:55.645507 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:55.769025 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:55.883314 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:55.892904 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:56.145623 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:56.269880 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:56.382628 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:56.388412 1136376 node_ready.go:58] node "addons-966885" has status "Ready":"False"
	I0717 21:04:56.392784 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:56.645324 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:56.769658 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:56.884064 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:56.892552 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:57.144875 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:57.269458 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:57.383063 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:57.394267 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:57.645107 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:57.769465 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:57.883494 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:57.892825 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:58.151006 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:58.269906 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:58.383340 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:58.389463 1136376 node_ready.go:58] node "addons-966885" has status "Ready":"False"
	I0717 21:04:58.393113 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:58.646078 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:58.769780 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:58.883107 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:58.892614 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:59.145198 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:59.271434 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:59.383505 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:59.393049 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:04:59.645675 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:04:59.771983 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:04:59.883877 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:04:59.893335 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:00.147615 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:00.276444 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:00.384187 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:00.393946 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:00.645516 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:00.769760 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:00.884145 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:00.889798 1136376 node_ready.go:58] node "addons-966885" has status "Ready":"False"
	I0717 21:05:00.894013 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:01.147029 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:01.270286 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:01.383267 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:01.393930 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:01.645765 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:01.770143 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:01.884057 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:01.893514 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:02.145763 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:02.269688 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:02.383696 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:02.392900 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:02.645506 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:02.769835 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:02.883845 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:02.892931 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:03.145619 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:03.269553 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:03.383014 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:03.388888 1136376 node_ready.go:58] node "addons-966885" has status "Ready":"False"
	I0717 21:05:03.392788 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:03.645711 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:03.769907 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:03.883037 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:03.894083 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:04.145199 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:04.269908 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:04.385465 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:04.393803 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:04.645565 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:04.782843 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:04.884221 1136376 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 21:05:04.884241 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
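The kapi.go:96 lines that dominate this log are minikube polling each addon's pods by label selector until every matching pod leaves Pending; the kapi.go:86 line fires once the selector first matches (here, 2 registry pods). A minimal client-go sketch of that polling pattern — the package name, the helper name waitForSelector, and the 500ms interval are illustrative assumptions, not minikube's actual code:

// waitForSelector is a sketch of the kapi.go:96 pattern: poll the pods that
// match a label selector until all of them are Running.
package waitutil

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForSelector(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists are retried, not fatal
			}
			for _, p := range pods.Items {
				if p.Status.Phase != v1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

All four selectors in this log (csi-hostpath-driver, gcp-auth, registry, ingress-nginx) run such loops concurrently, which is why their lines interleave at steady ~500ms cadences.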
	I0717 21:05:04.896409 1136376 node_ready.go:49] node "addons-966885" has status "Ready":"True"
	I0717 21:05:04.896434 1136376 node_ready.go:38] duration metric: took 29.528396077s waiting for node "addons-966885" to be "Ready" ...
	I0717 21:05:04.896444 1136376 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 21:05:04.897963 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:04.920194 1136376 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-lnw6x" in "kube-system" namespace to be "Ready" ...
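The node flipped Ready at 21:05:04 after 29.5s, and coredns is now polled the same way. The node_ready.go and pod_ready.go lines are condition checks rather than phase checks: they read the object's status conditions until Ready reports True. Sketches of both predicates, assuming client-go core/v1 types:

// Sketches of the predicates behind the node_ready.go:58 and pod_ready.go
// lines: both inspect status conditions, not the phase.
package waitutil

import v1 "k8s.io/api/core/v1"

func isNodeReady(n *v1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == v1.NodeReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func isPodReady(p *v1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}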
	I0717 21:05:05.165195 1136376 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 21:05:05.165223 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:05.276852 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:05.387439 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:05.394909 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:05.684243 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:05.770517 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:05.884381 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:05.894124 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:06.152410 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:06.270791 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:06.383976 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:06.393971 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:06.647912 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:06.775025 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:06.885899 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:06.894592 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:06.941875 1136376 pod_ready.go:102] pod "coredns-5d78c9869d-lnw6x" in "kube-system" namespace has status "Ready":"False"
	I0717 21:05:07.148290 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:07.270897 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:07.386180 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:07.397766 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:07.647791 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:07.769916 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:07.884349 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:07.895680 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:08.148534 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:08.269624 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:08.385808 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:08.395029 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:08.649246 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:08.770581 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:08.885492 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:08.894561 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:08.942487 1136376 pod_ready.go:102] pod "coredns-5d78c9869d-lnw6x" in "kube-system" namespace has status "Ready":"False"
	I0717 21:05:09.147196 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:09.270280 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:09.401901 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:09.403300 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:09.649431 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:09.770325 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:09.885945 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:09.906544 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:10.151194 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:10.271766 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:10.384600 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:10.398812 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:10.648068 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:10.770463 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:10.884992 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:10.896809 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:11.148384 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:11.269621 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:11.386395 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:11.395641 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:11.438614 1136376 pod_ready.go:102] pod "coredns-5d78c9869d-lnw6x" in "kube-system" namespace has status "Ready":"False"
	I0717 21:05:11.646219 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:11.769913 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:11.887332 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:11.894002 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:12.146163 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:12.269467 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:12.384147 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:12.394051 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:12.647505 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:12.770241 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:12.885328 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:12.900163 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:13.148598 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:13.270279 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:13.392763 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:13.401211 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:13.439640 1136376 pod_ready.go:102] pod "coredns-5d78c9869d-lnw6x" in "kube-system" namespace has status "Ready":"False"
	I0717 21:05:13.652211 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:13.769604 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:13.883987 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:13.893826 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:14.146536 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:14.270111 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:14.384131 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:14.393988 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:14.647584 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:14.769200 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:14.884351 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:14.893934 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:15.147307 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:15.270540 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:15.385486 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:15.394184 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:15.440286 1136376 pod_ready.go:92] pod "coredns-5d78c9869d-lnw6x" in "kube-system" namespace has status "Ready":"True"
	I0717 21:05:15.440354 1136376 pod_ready.go:81] duration metric: took 10.520129684s waiting for pod "coredns-5d78c9869d-lnw6x" in "kube-system" namespace to be "Ready" ...
	I0717 21:05:15.440393 1136376 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-966885" in "kube-system" namespace to be "Ready" ...
	I0717 21:05:15.449378 1136376 pod_ready.go:92] pod "etcd-addons-966885" in "kube-system" namespace has status "Ready":"True"
	I0717 21:05:15.449435 1136376 pod_ready.go:81] duration metric: took 9.021547ms waiting for pod "etcd-addons-966885" in "kube-system" namespace to be "Ready" ...
	I0717 21:05:15.449472 1136376 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-966885" in "kube-system" namespace to be "Ready" ...
	I0717 21:05:15.457338 1136376 pod_ready.go:92] pod "kube-apiserver-addons-966885" in "kube-system" namespace has status "Ready":"True"
	I0717 21:05:15.457408 1136376 pod_ready.go:81] duration metric: took 7.912108ms waiting for pod "kube-apiserver-addons-966885" in "kube-system" namespace to be "Ready" ...
	I0717 21:05:15.457433 1136376 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-966885" in "kube-system" namespace to be "Ready" ...
	I0717 21:05:15.464944 1136376 pod_ready.go:92] pod "kube-controller-manager-addons-966885" in "kube-system" namespace has status "Ready":"True"
	I0717 21:05:15.465004 1136376 pod_ready.go:81] duration metric: took 7.548153ms waiting for pod "kube-controller-manager-addons-966885" in "kube-system" namespace to be "Ready" ...
	I0717 21:05:15.465045 1136376 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p6qqp" in "kube-system" namespace to be "Ready" ...
	I0717 21:05:15.472638 1136376 pod_ready.go:92] pod "kube-proxy-p6qqp" in "kube-system" namespace has status "Ready":"True"
	I0717 21:05:15.472704 1136376 pod_ready.go:81] duration metric: took 7.636809ms waiting for pod "kube-proxy-p6qqp" in "kube-system" namespace to be "Ready" ...
	I0717 21:05:15.472731 1136376 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-966885" in "kube-system" namespace to be "Ready" ...
	I0717 21:05:15.647248 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:15.772520 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:15.837502 1136376 pod_ready.go:92] pod "kube-scheduler-addons-966885" in "kube-system" namespace has status "Ready":"True"
	I0717 21:05:15.837526 1136376 pod_ready.go:81] duration metric: took 364.774645ms waiting for pod "kube-scheduler-addons-966885" in "kube-system" namespace to be "Ready" ...
	I0717 21:05:15.837539 1136376 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-7jvwv" in "kube-system" namespace to be "Ready" ...
	I0717 21:05:15.889131 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:15.900153 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:16.147966 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:16.270214 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:16.388927 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:16.407988 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:16.647602 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:16.770304 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:16.884241 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:16.894501 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:17.153145 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:17.286873 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:17.385807 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:17.394909 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:17.651618 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:17.777144 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:17.887442 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:17.894186 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:18.151555 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:18.247628 1136376 pod_ready.go:102] pod "metrics-server-844d8db974-7jvwv" in "kube-system" namespace has status "Ready":"False"
	I0717 21:05:18.271022 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:18.426788 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:18.427670 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:18.661141 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:18.772344 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:18.886158 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:18.899185 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:19.171706 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:19.253972 1136376 pod_ready.go:92] pod "metrics-server-844d8db974-7jvwv" in "kube-system" namespace has status "Ready":"True"
	I0717 21:05:19.254042 1136376 pod_ready.go:81] duration metric: took 3.416494409s waiting for pod "metrics-server-844d8db974-7jvwv" in "kube-system" namespace to be "Ready" ...
	I0717 21:05:19.254078 1136376 pod_ready.go:38] duration metric: took 14.357620963s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 21:05:19.254122 1136376 api_server.go:52] waiting for apiserver process to appear ...
	I0717 21:05:19.254217 1136376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 21:05:19.274901 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:19.300111 1136376 api_server.go:72] duration metric: took 48.727194773s to wait for apiserver process to appear ...
	I0717 21:05:19.300181 1136376 api_server.go:88] waiting for apiserver healthz status ...
	I0717 21:05:19.300213 1136376 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 21:05:19.339392 1136376 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 21:05:19.343330 1136376 api_server.go:141] control plane version: v1.27.3
	I0717 21:05:19.343401 1136376 api_server.go:131] duration metric: took 43.198474ms to wait for apiserver health ...
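The api_server.go sequence above is two checks: the pgrep at 21:05:19.254 confirms a kube-apiserver process exists, then the /healthz endpoint is probed until it answers 200 with body "ok". A standalone sketch of the HTTP probe; InsecureSkipVerify is an illustrative shortcut only, the real check authenticates with the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Probe the healthz endpoint seen in the log.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // healthy apiserver answers 200 "ok"
}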
	I0717 21:05:19.343439 1136376 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 21:05:19.359677 1136376 system_pods.go:59] 17 kube-system pods found
	I0717 21:05:19.359748 1136376 system_pods.go:61] "coredns-5d78c9869d-lnw6x" [12965bf7-5676-4949-b94f-9bc4f6508f29] Running
	I0717 21:05:19.359771 1136376 system_pods.go:61] "csi-hostpath-attacher-0" [cbb5ef14-3796-422b-ac55-298e6e5ece99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 21:05:19.359796 1136376 system_pods.go:61] "csi-hostpath-resizer-0" [29c05cfb-e9eb-4449-9103-affd6635982a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 21:05:19.359841 1136376 system_pods.go:61] "csi-hostpathplugin-d8nsr" [0a86f23f-3fa5-43aa-98aa-93041152d092] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 21:05:19.359861 1136376 system_pods.go:61] "etcd-addons-966885" [93cce6f3-6e9b-4a32-96db-d6a453dd8a41] Running
	I0717 21:05:19.359880 1136376 system_pods.go:61] "kindnet-xr4zk" [f8cb38ee-e18c-4405-bbfa-698acbd7fad1] Running
	I0717 21:05:19.359900 1136376 system_pods.go:61] "kube-apiserver-addons-966885" [cd29da95-a50e-4ce8-8d9e-474be00d9276] Running
	I0717 21:05:19.359932 1136376 system_pods.go:61] "kube-controller-manager-addons-966885" [aaf942f7-c6e6-491a-ad5a-c1c520fca243] Running
	I0717 21:05:19.359953 1136376 system_pods.go:61] "kube-ingress-dns-minikube" [fee2cc14-8383-4435-910e-25bbf22dbfb9] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0717 21:05:19.359973 1136376 system_pods.go:61] "kube-proxy-p6qqp" [918a0626-8506-477c-b0c3-9d232a10bcf7] Running
	I0717 21:05:19.359994 1136376 system_pods.go:61] "kube-scheduler-addons-966885" [cf49252b-b2af-4a34-a268-3e073fec4e38] Running
	I0717 21:05:19.360028 1136376 system_pods.go:61] "metrics-server-844d8db974-7jvwv" [f07620db-0f6a-44f1-87ce-68016e67d4b0] Running
	I0717 21:05:19.360050 1136376 system_pods.go:61] "registry-proxy-l2jb4" [64c0d8ab-3b0f-4220-aa8b-e6af17da8a29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 21:05:19.360075 1136376 system_pods.go:61] "registry-pw2qn" [06dc9e5a-f654-41c5-be3e-33ed763b415d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 21:05:19.360109 1136376 system_pods.go:61] "snapshot-controller-75bbb956b9-hndhm" [4ae86202-5b1a-4a52-aea0-2a1a9c366c90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:05:19.360135 1136376 system_pods.go:61] "snapshot-controller-75bbb956b9-pqhzd" [3d8908dd-a5a5-4f3c-96c7-4bbb7889f146] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:05:19.360155 1136376 system_pods.go:61] "storage-provisioner" [bdf5d4e6-158e-422a-bce2-1462c010f5a0] Running
	I0717 21:05:19.360175 1136376 system_pods.go:74] duration metric: took 16.713264ms to wait for pod list to return data ...
	I0717 21:05:19.360194 1136376 default_sa.go:34] waiting for default service account to be created ...
	I0717 21:05:19.378489 1136376 default_sa.go:45] found service account: "default"
	I0717 21:05:19.378516 1136376 default_sa.go:55] duration metric: took 18.291158ms for default service account to be created ...
	I0717 21:05:19.378525 1136376 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 21:05:19.408177 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:19.410020 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:19.417127 1136376 system_pods.go:86] 17 kube-system pods found
	I0717 21:05:19.417208 1136376 system_pods.go:89] "coredns-5d78c9869d-lnw6x" [12965bf7-5676-4949-b94f-9bc4f6508f29] Running
	I0717 21:05:19.417234 1136376 system_pods.go:89] "csi-hostpath-attacher-0" [cbb5ef14-3796-422b-ac55-298e6e5ece99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 21:05:19.417258 1136376 system_pods.go:89] "csi-hostpath-resizer-0" [29c05cfb-e9eb-4449-9103-affd6635982a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 21:05:19.417286 1136376 system_pods.go:89] "csi-hostpathplugin-d8nsr" [0a86f23f-3fa5-43aa-98aa-93041152d092] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 21:05:19.417319 1136376 system_pods.go:89] "etcd-addons-966885" [93cce6f3-6e9b-4a32-96db-d6a453dd8a41] Running
	I0717 21:05:19.417344 1136376 system_pods.go:89] "kindnet-xr4zk" [f8cb38ee-e18c-4405-bbfa-698acbd7fad1] Running
	I0717 21:05:19.417376 1136376 system_pods.go:89] "kube-apiserver-addons-966885" [cd29da95-a50e-4ce8-8d9e-474be00d9276] Running
	I0717 21:05:19.417397 1136376 system_pods.go:89] "kube-controller-manager-addons-966885" [aaf942f7-c6e6-491a-ad5a-c1c520fca243] Running
	I0717 21:05:19.417430 1136376 system_pods.go:89] "kube-ingress-dns-minikube" [fee2cc14-8383-4435-910e-25bbf22dbfb9] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0717 21:05:19.417453 1136376 system_pods.go:89] "kube-proxy-p6qqp" [918a0626-8506-477c-b0c3-9d232a10bcf7] Running
	I0717 21:05:19.417476 1136376 system_pods.go:89] "kube-scheduler-addons-966885" [cf49252b-b2af-4a34-a268-3e073fec4e38] Running
	I0717 21:05:19.417497 1136376 system_pods.go:89] "metrics-server-844d8db974-7jvwv" [f07620db-0f6a-44f1-87ce-68016e67d4b0] Running
	I0717 21:05:19.417528 1136376 system_pods.go:89] "registry-proxy-l2jb4" [64c0d8ab-3b0f-4220-aa8b-e6af17da8a29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 21:05:19.417553 1136376 system_pods.go:89] "registry-pw2qn" [06dc9e5a-f654-41c5-be3e-33ed763b415d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 21:05:19.417577 1136376 system_pods.go:89] "snapshot-controller-75bbb956b9-hndhm" [4ae86202-5b1a-4a52-aea0-2a1a9c366c90] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:05:19.417600 1136376 system_pods.go:89] "snapshot-controller-75bbb956b9-pqhzd" [3d8908dd-a5a5-4f3c-96c7-4bbb7889f146] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:05:19.417630 1136376 system_pods.go:89] "storage-provisioner" [bdf5d4e6-158e-422a-bce2-1462c010f5a0] Running
	I0717 21:05:19.417656 1136376 system_pods.go:126] duration metric: took 39.124173ms to wait for k8s-apps to be running ...
	I0717 21:05:19.417678 1136376 system_svc.go:44] waiting for kubelet service to be running ...
	I0717 21:05:19.417749 1136376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:05:19.432698 1136376 system_svc.go:56] duration metric: took 15.000618ms WaitForService to wait for kubelet.
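The system_svc.go check runs systemctl is-active --quiet over SSH, and that command exits 0 only while the unit is active, so the exit status is the whole check. A local sketch of the same idea:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// is-active --quiet prints nothing; a nil error from Run() means the
	// kubelet unit is active. Run locally here instead of over SSH.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}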
	I0717 21:05:19.432769 1136376 kubeadm.go:581] duration metric: took 48.859847252s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 21:05:19.432809 1136376 node_conditions.go:102] verifying NodePressure condition ...
	I0717 21:05:19.436518 1136376 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 21:05:19.436587 1136376 node_conditions.go:123] node cpu capacity is 2
	I0717 21:05:19.436615 1136376 node_conditions.go:105] duration metric: took 3.786508ms to run NodePressure ...
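node_conditions.go reads capacity (the 203034800Ki ephemeral storage and 2 CPUs above come from Node.Status.Capacity) and verifies that no pressure condition is True. A sketch of the same checks over a client-go v1.Node; the helper name is an assumption:

// verifyNodePressure reports the node's capacity and fails if any pressure
// condition (memory, disk, PID) is True.
package waitutil

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func verifyNodePressure(n *v1.Node) error {
	fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral())
	fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu())
	for _, c := range n.Status.Conditions {
		switch c.Type {
		case v1.NodeMemoryPressure, v1.NodeDiskPressure, v1.NodePIDPressure:
			if c.Status == v1.ConditionTrue {
				return fmt.Errorf("node %s reports %s", n.Name, c.Type)
			}
		}
	}
	return nil
}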
	I0717 21:05:19.436639 1136376 start.go:228] waiting for startup goroutines ...
	I0717 21:05:19.652328 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:19.773267 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:19.883901 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:19.894040 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:20.150126 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:20.269921 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:20.384772 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:20.394000 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:20.648210 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:20.801600 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:20.883860 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:20.896460 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:21.148427 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:21.271789 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:21.384151 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:21.395575 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:21.648408 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:21.770597 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:21.884831 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:21.894479 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:22.147306 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:22.269825 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:22.384492 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:22.394428 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:22.647833 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:22.770409 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:22.883679 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:22.893560 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:23.148284 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:23.270774 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:23.394842 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:23.400494 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:23.660012 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:23.770828 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:23.883934 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:23.896487 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:24.151434 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:24.269610 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:24.384070 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:24.393946 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:24.646883 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:24.769634 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:24.883775 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:24.895221 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:25.147353 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:25.271110 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:25.383975 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:25.393977 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:25.650984 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:25.772159 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:25.885915 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:25.895780 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:26.147292 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:26.270776 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:26.385233 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:26.396104 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:26.648405 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:26.783033 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:26.885469 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:26.893372 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:27.147810 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:27.269696 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:27.399753 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:27.408261 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:27.647288 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:27.769383 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:27.883726 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:27.893790 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:28.146794 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:28.270131 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:28.385313 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:28.402138 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:28.654866 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:28.770462 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:28.884515 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:28.895021 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:29.148070 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:29.271286 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:29.385462 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:29.394787 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:29.658551 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:29.769869 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:29.886768 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:29.894670 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:30.149135 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:30.270370 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:30.384754 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:30.394901 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:30.657412 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:30.770000 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:30.883660 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:30.893685 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:31.149556 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:31.269217 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:31.384295 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:31.393941 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:31.646953 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:31.769859 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:31.884541 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:31.894268 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:32.156397 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:32.272233 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:32.387709 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:32.399180 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:32.648089 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:32.772945 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:32.884906 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:32.896016 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:33.147540 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:33.270040 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:33.384102 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:33.393969 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:33.648655 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:33.770272 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:33.884515 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:33.894224 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:34.152703 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:34.269791 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:34.386496 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:34.394074 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:34.647310 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:34.770266 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:34.883780 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:34.893619 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:35.153691 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:35.270722 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:35.384605 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:35.393756 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:35.646386 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:35.770280 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:35.884389 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:35.893916 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:36.146588 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:36.269415 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:36.388557 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:36.395666 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:36.646782 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:36.769746 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:36.907381 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:36.910082 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:37.147698 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:37.274916 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:37.385374 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:37.394493 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:37.646644 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:37.771829 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:37.884603 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:37.904223 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:38.151379 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:38.270697 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:38.389699 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:38.395041 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:38.652038 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:38.770719 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:38.885206 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:38.894699 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:39.153936 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:39.270319 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:39.389289 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:39.398916 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:39.648080 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:39.770280 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:39.885282 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:39.896596 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:40.146773 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:40.270604 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:40.384041 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:40.393933 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:40.646857 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:40.770008 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:40.884483 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:05:40.894289 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:41.152157 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:41.269718 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:41.390440 1136376 kapi.go:107] duration metric: took 1m6.02252533s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 21:05:41.400605 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:41.654890 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:41.770148 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:41.902705 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:42.154411 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:42.270440 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:42.394952 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:42.647570 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:42.770611 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:42.895208 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:43.148005 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:43.270572 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:43.404244 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:43.648724 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:43.770789 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:43.896468 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:44.148505 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:44.295754 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:44.421231 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:44.646853 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:44.770777 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:44.893740 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:45.173542 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:45.272336 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:45.409925 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:45.647615 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:45.770661 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:45.894440 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:46.147917 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:46.270010 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:46.395215 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:46.649355 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:46.770623 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:46.894265 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:47.148201 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:47.270502 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:47.395128 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:47.648095 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:47.770737 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:47.893621 1136376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:05:48.146390 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:48.269915 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:48.394117 1136376 kapi.go:107] duration metric: took 1m13.022844109s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 21:05:48.649268 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:48.770323 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:49.155569 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:49.272482 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:49.646385 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:49.770067 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:50.148431 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:50.271429 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:50.646572 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:50.770205 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:51.148478 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:51.269596 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:51.663653 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:51.771550 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:52.172960 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:52.271368 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:52.646133 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:52.770090 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:05:53.147274 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:53.269772 1136376 kapi.go:107] duration metric: took 1m12.523159675s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 21:05:53.271700 1136376 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-966885 cluster.
	I0717 21:05:53.273304 1136376 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 21:05:53.274823 1136376 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
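	Because gcp-auth works as a mutating admission webhook, the opt-out label has to be present when the pod is created; labeling an already-running pod unmounts nothing (hence the advice above to recreate existing pods). A minimal sketch, assuming a throwaway pod whose name and image are placeholders and not part of this run:
	
	  # create a pod the webhook will skip (name/image illustrative only)
	  kubectl --context addons-966885 run no-gcp-creds --image=nginx \
	    --labels=gcp-auth-skip-secret=true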
	I0717 21:05:53.647368 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:54.148010 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:54.648436 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:55.146390 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:55.648523 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:56.152029 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:56.646965 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:57.147234 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:57.648046 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:58.147048 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:58.646938 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:59.147976 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:05:59.648142 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:06:00.198926 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:06:00.647228 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:06:01.149723 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:06:01.647226 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:06:02.146297 1136376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:06:02.646604 1136376 kapi.go:107] duration metric: took 1m27.016892962s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 21:06:02.649104 1136376 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, default-storageclass, inspektor-gadget, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0717 21:06:02.650803 1136376 addons.go:502] enable addons completed in 1m33.381453962s: enabled=[cloud-spanner storage-provisioner ingress-dns default-storageclass inspektor-gadget metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0717 21:06:02.650856 1136376 start.go:233] waiting for cluster config update ...
	I0717 21:06:02.650876 1136376 start.go:242] writing updated cluster config ...
	I0717 21:06:02.651229 1136376 ssh_runner.go:195] Run: rm -f paused
	I0717 21:06:02.724626 1136376 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 21:06:02.726653 1136376 out.go:177] * Done! kubectl is now configured to use "addons-966885" cluster and "default" namespace by default
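	A quick way to confirm that the reported context and namespace actually took effect, using standard kubectl commands that are not part of this run's output:
	
	  kubectl config current-context                            # expect: addons-966885
	  kubectl config view --minify -o 'jsonpath={..namespace}'  # empty or "default"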
	
	* 
	* ==> CRI-O <==
	* Jul 17 21:09:19 addons-966885 conmon[4591]: conmon 817afa02a8d70f997d2a <ninfo>: container 4603 exited with status 137
	Jul 17 21:09:19 addons-966885 crio[886]: time="2023-07-17 21:09:19.815110810Z" level=info msg="Stopped container 817afa02a8d70f997d2ae0bdac19c3012b08c08185fff85804372deccf0494e5: ingress-nginx/ingress-nginx-controller-7799c6795f-nvjrw/controller" id=daabe1b4-6541-4c8c-bb6b-66519ce4d56d name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 21:09:19 addons-966885 crio[886]: time="2023-07-17 21:09:19.815677972Z" level=info msg="Stopping pod sandbox: 870984609d4ab99ad4e3e1c7751e5d740d5d3d6d4b68f3cfe90a7bc0b885d8f3" id=f452dd69-43df-4d82-b0d0-ffbb03c7147d name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 21:09:19 addons-966885 crio[886]: time="2023-07-17 21:09:19.820426021Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-3FY7RQLHBOOV4KAF - [0:0]\n:KUBE-HP-VUPOA4F2CAWB52MB - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-VUPOA4F2CAWB52MB\n-X KUBE-HP-3FY7RQLHBOOV4KAF\nCOMMIT\n"
	Jul 17 21:09:19 addons-966885 crio[886]: time="2023-07-17 21:09:19.822089174Z" level=info msg="Closing host port tcp:80"
	Jul 17 21:09:19 addons-966885 crio[886]: time="2023-07-17 21:09:19.822141104Z" level=info msg="Closing host port tcp:443"
	Jul 17 21:09:19 addons-966885 crio[886]: time="2023-07-17 21:09:19.823782628Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 17 21:09:19 addons-966885 crio[886]: time="2023-07-17 21:09:19.823822858Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 17 21:09:19 addons-966885 crio[886]: time="2023-07-17 21:09:19.823998817Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7799c6795f-nvjrw Namespace:ingress-nginx ID:870984609d4ab99ad4e3e1c7751e5d740d5d3d6d4b68f3cfe90a7bc0b885d8f3 UID:e0ee3adb-86d6-4def-9879-9bf12bf0836e NetNS:/var/run/netns/d2dfd54d-c08a-4e9f-9710-326a5885018b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 21:09:19 addons-966885 crio[886]: time="2023-07-17 21:09:19.824145196Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7799c6795f-nvjrw from CNI network \"kindnet\" (type=ptp)"
	Jul 17 21:09:19 addons-966885 crio[886]: time="2023-07-17 21:09:19.850701492Z" level=info msg="Stopped pod sandbox: 870984609d4ab99ad4e3e1c7751e5d740d5d3d6d4b68f3cfe90a7bc0b885d8f3" id=f452dd69-43df-4d82-b0d0-ffbb03c7147d name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 21:09:20 addons-966885 crio[886]: time="2023-07-17 21:09:20.370168389Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=b5f4fb49-b117-4cd2-aaf9-ef388a6f5081 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 21:09:20 addons-966885 crio[886]: time="2023-07-17 21:09:20.370464479Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=b5f4fb49-b117-4cd2-aaf9-ef388a6f5081 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 21:09:20 addons-966885 crio[886]: time="2023-07-17 21:09:20.372143943Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=a4def717-e040-476d-9c79-ce5ebbd7b825 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 21:09:20 addons-966885 crio[886]: time="2023-07-17 21:09:20.372394971Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=a4def717-e040-476d-9c79-ce5ebbd7b825 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 21:09:20 addons-966885 crio[886]: time="2023-07-17 21:09:20.374944146Z" level=info msg="Creating container: default/hello-world-app-65bdb79f98-4dm42/hello-world-app" id=e43fc188-0569-4c99-9f67-a45fb6c495f8 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 21:09:20 addons-966885 crio[886]: time="2023-07-17 21:09:20.375064129Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 21:09:20 addons-966885 crio[886]: time="2023-07-17 21:09:20.469511389Z" level=info msg="Created container b09b8b4551bf771414f1d49b0acbc02b8ebb7efb75ec58b09c30d24504b24396: default/hello-world-app-65bdb79f98-4dm42/hello-world-app" id=e43fc188-0569-4c99-9f67-a45fb6c495f8 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 21:09:20 addons-966885 crio[886]: time="2023-07-17 21:09:20.470453041Z" level=info msg="Starting container: b09b8b4551bf771414f1d49b0acbc02b8ebb7efb75ec58b09c30d24504b24396" id=87d33f11-efb9-4826-805b-9fd2a5b558f1 name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 21:09:20 addons-966885 conmon[7747]: conmon b09b8b4551bf771414f1 <ninfo>: container 7758 exited with status 1
	Jul 17 21:09:20 addons-966885 crio[886]: time="2023-07-17 21:09:20.499887382Z" level=info msg="Started container" PID=7758 containerID=b09b8b4551bf771414f1d49b0acbc02b8ebb7efb75ec58b09c30d24504b24396 description=default/hello-world-app-65bdb79f98-4dm42/hello-world-app id=87d33f11-efb9-4826-805b-9fd2a5b558f1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b650f481d36350c6530bfd8a7f4316198a9c016501b783e310105d71aa4ac6f
	Jul 17 21:09:20 addons-966885 crio[886]: time="2023-07-17 21:09:20.823277300Z" level=info msg="Removing container: 817afa02a8d70f997d2ae0bdac19c3012b08c08185fff85804372deccf0494e5" id=5abbf7b5-5afa-41ea-8f4b-aaeb3e284118 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 21:09:20 addons-966885 crio[886]: time="2023-07-17 21:09:20.854264270Z" level=info msg="Removed container 817afa02a8d70f997d2ae0bdac19c3012b08c08185fff85804372deccf0494e5: ingress-nginx/ingress-nginx-controller-7799c6795f-nvjrw/controller" id=5abbf7b5-5afa-41ea-8f4b-aaeb3e284118 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 21:09:20 addons-966885 crio[886]: time="2023-07-17 21:09:20.855660839Z" level=info msg="Removing container: 4b1bfcfea867a0c5aadac9980377581fae2defe0b507cf3283c32ee94eff9b07" id=49eac30f-a0b5-4be1-bd73-63fcdee43cc1 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 21:09:20 addons-966885 crio[886]: time="2023-07-17 21:09:20.878669635Z" level=info msg="Removed container 4b1bfcfea867a0c5aadac9980377581fae2defe0b507cf3283c32ee94eff9b07: default/hello-world-app-65bdb79f98-4dm42/hello-world-app" id=49eac30f-a0b5-4be1-bd73-63fcdee43cc1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b09b8b4551bf7       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                             6 seconds ago       Exited              hello-world-app           2                   3b650f481d363       hello-world-app-65bdb79f98-4dm42
	570bb2c65bbe1       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                              2 minutes ago       Running             nginx                     0                   536dc17af7e7d       nginx
	96878a5d7c3e7       ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45                        3 minutes ago       Running             headlamp                  0                   6f809d2903096       headlamp-66f6498c69-xs69b
	3519ade278cbd       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 3 minutes ago       Running             gcp-auth                  0                   5fca0ebe095ac       gcp-auth-58478865f7-vn5md
	2f8f790cbadf6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   4 minutes ago       Exited              patch                     0                   59cf46fcab2fe       ingress-nginx-admission-patch-p72kg
	1e4f66e84aaa6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   4 minutes ago       Exited              create                    0                   8868cb9ebc79c       ingress-nginx-admission-create-jp2f4
	e6c174538d161       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago       Running             storage-provisioner       0                   10ed5d41155e4       storage-provisioner
	2e5a58ec11593       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago       Running             coredns                   0                   b8003834b2964       coredns-5d78c9869d-lnw6x
	080f837576954       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                                             4 minutes ago       Running             kindnet-cni               0                   da6e7de2405e5       kindnet-xr4zk
	5ffe69ef1ea8f       fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a                                                             4 minutes ago       Running             kube-proxy                0                   7d4be24cd82a7       kube-proxy-p6qqp
	8264bc5e01c56       ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8                                                             5 minutes ago       Running             kube-controller-manager   0                   d4833e4ca53b4       kube-controller-manager-addons-966885
	4093d666d14b8       39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473                                                             5 minutes ago       Running             kube-apiserver            0                   91c055a1c9d59       kube-apiserver-addons-966885
	1dd864a6d04ff       bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540                                                             5 minutes ago       Running             kube-scheduler            0                   1e06b15397da2       kube-scheduler-addons-966885
	41dc482582f53       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                                             5 minutes ago       Running             etcd                      0                   400120bf900f4       etcd-addons-966885
	
	* 
	* ==> coredns [2e5a58ec115938cb74d97aafe3b03acce9d5814bfcdfdc5b1ed4b1ee6fa2b289] <==
	* [INFO] 10.244.0.16:52397 - 41885 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004055s
	[INFO] 10.244.0.16:50764 - 60022 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002141323s
	[INFO] 10.244.0.16:52397 - 15516 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001563346s
	[INFO] 10.244.0.16:50764 - 51828 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002994713s
	[INFO] 10.244.0.16:52397 - 23031 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003080554s
	[INFO] 10.244.0.16:52397 - 13390 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00013431s
	[INFO] 10.244.0.16:50764 - 43378 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067167s
	[INFO] 10.244.0.16:57551 - 37423 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000132152s
	[INFO] 10.244.0.16:37825 - 57089 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076562s
	[INFO] 10.244.0.16:37825 - 20452 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044587s
	[INFO] 10.244.0.16:57551 - 1801 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067783s
	[INFO] 10.244.0.16:37825 - 18003 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005929s
	[INFO] 10.244.0.16:57551 - 20296 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045144s
	[INFO] 10.244.0.16:37825 - 5778 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000065797s
	[INFO] 10.244.0.16:57551 - 50752 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045185s
	[INFO] 10.244.0.16:37825 - 5512 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000062212s
	[INFO] 10.244.0.16:57551 - 34560 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000114346s
	[INFO] 10.244.0.16:37825 - 47245 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058125s
	[INFO] 10.244.0.16:57551 - 6132 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00014025s
	[INFO] 10.244.0.16:57551 - 62660 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00093531s
	[INFO] 10.244.0.16:37825 - 26901 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001279399s
	[INFO] 10.244.0.16:57551 - 9453 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000992738s
	[INFO] 10.244.0.16:57551 - 23711 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000084094s
	[INFO] 10.244.0.16:37825 - 12446 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001197882s
	[INFO] 10.244.0.16:37825 - 10419 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000047007s
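	The NXDOMAIN-then-NOERROR pattern above is ordinary resolv.conf search-path expansion: with the Kubernetes default of ndots:5, a name with fewer than five dots such as hello-world-app.default.svc.cluster.local is first tried with each search suffix appended (.ingress-nginx.svc.cluster.local, .svc.cluster.local, .cluster.local, .us-east-2.compute.internal) before the absolute query finally answers NOERROR; a trailing dot would bypass the expansion entirely. To observe the same behavior from inside the cluster, assuming a placeholder pod whose image ships nslookup:
	
	  kubectl exec <some-pod> -- cat /etc/resolv.conf   # shows the search domains and ndots:5
	  kubectl exec <some-pod> -- nslookup hello-world-app.default.svc.cluster.local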
	
	* 
	* ==> describe nodes <==
	* Name:               addons-966885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-966885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=addons-966885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T21_04_17_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-966885
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 21:04:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-966885
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 21:09:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 21:09:22 +0000   Mon, 17 Jul 2023 21:04:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 21:09:22 +0000   Mon, 17 Jul 2023 21:04:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 21:09:22 +0000   Mon, 17 Jul 2023 21:04:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 21:09:22 +0000   Mon, 17 Jul 2023 21:05:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-966885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	System Info:
	  Machine ID:                 a9ab74c3f71e4fcf91d82fa2ef621ac1
	  System UUID:                4023f661-2423-4034-8a50-5b41288b8c98
	  Boot ID:                    30727b23-eda1-49fe-8b46-0f11c052162c
	  Kernel Version:             5.15.0-1039-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-4dm42         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  gcp-auth                    gcp-auth-58478865f7-vn5md                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  headlamp                    headlamp-66f6498c69-xs69b                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 coredns-5d78c9869d-lnw6x                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m56s
	  kube-system                 etcd-addons-966885                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m11s
	  kube-system                 kindnet-xr4zk                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m56s
	  kube-system                 kube-apiserver-addons-966885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-controller-manager-addons-966885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-proxy-p6qqp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-addons-966885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
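	For reference, these percentages are plain allocatable-relative arithmetic with the fraction truncated: 850m of CPU requested against the node's 2-CPU (2000m) allocatable is 850/2000 = 42.5%, displayed as 42%; 220Mi of memory against 8022632Ki allocatable is 225280/8022632 ≈ 2.8%, displayed as 2%.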
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m52s                  kube-proxy       
	  Normal  Starting                 5m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m19s (x8 over 5m19s)  kubelet          Node addons-966885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s (x8 over 5m19s)  kubelet          Node addons-966885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x8 over 5m19s)  kubelet          Node addons-966885 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m11s                  kubelet          Node addons-966885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m11s                  kubelet          Node addons-966885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m11s                  kubelet          Node addons-966885 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m58s                  node-controller  Node addons-966885 event: Registered Node addons-966885 in Controller
	  Normal  NodeReady                4m23s                  kubelet          Node addons-966885 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001097] FS-Cache: O-key=[8] '49d5c90000000000'
	[  +0.000700] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000976] FS-Cache: N-cookie d=00000000ec1ea241{9p.inode} n=000000007afde1f6
	[  +0.001074] FS-Cache: N-key=[8] '49d5c90000000000'
	[  +0.002599] FS-Cache: Duplicate cookie detected
	[  +0.000674] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.000944] FS-Cache: O-cookie d=00000000ec1ea241{9p.inode} n=00000000e27303ae
	[  +0.001088] FS-Cache: O-key=[8] '49d5c90000000000'
	[  +0.000697] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000938] FS-Cache: N-cookie d=00000000ec1ea241{9p.inode} n=00000000ab96e722
	[  +0.001044] FS-Cache: N-key=[8] '49d5c90000000000'
	[  +1.758024] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000990] FS-Cache: O-cookie d=00000000ec1ea241{9p.inode} n=000000001063103b
	[  +0.001090] FS-Cache: O-key=[8] '48d5c90000000000'
	[  +0.000695] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000964] FS-Cache: N-cookie d=00000000ec1ea241{9p.inode} n=000000002fa3e7b2
	[  +0.001041] FS-Cache: N-key=[8] '48d5c90000000000'
	[  +0.407711] FS-Cache: Duplicate cookie detected
	[  +0.000768] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001001] FS-Cache: O-cookie d=00000000ec1ea241{9p.inode} n=00000000a1180baf
	[  +0.001057] FS-Cache: O-key=[8] '4ed5c90000000000'
	[  +0.000761] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001003] FS-Cache: N-cookie d=00000000ec1ea241{9p.inode} n=0000000066824c37
	[  +0.001091] FS-Cache: N-key=[8] '4ed5c90000000000'
	
	* 
	* ==> etcd [41dc482582f532c2b9b88172f19ba44d6a729d0709f3af2ab52911f86dc3d42a] <==
	* {"level":"info","ts":"2023-07-17T21:04:33.132Z","caller":"traceutil/trace.go:171","msg":"trace[894228347] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:393; }","duration":"229.17911ms","start":"2023-07-17T21:04:32.903Z","end":"2023-07-17T21:04:33.132Z","steps":["trace[894228347] 'agreement among raft nodes before linearized reading'  (duration: 91.768017ms)","trace[894228347] 'range keys from in-memory index tree'  (duration: 137.268726ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T21:04:33.133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.098304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" ","response":"range_response_count:1 size:520"}
	{"level":"warn","ts":"2023-07-17T21:04:33.133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.158632ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T21:04:33.133Z","caller":"traceutil/trace.go:171","msg":"trace[2112080461] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:393; }","duration":"230.26047ms","start":"2023-07-17T21:04:32.903Z","end":"2023-07-17T21:04:33.133Z","steps":["trace[2112080461] 'agreement among raft nodes before linearized reading'  (duration: 91.987356ms)","trace[2112080461] 'range keys from in-memory index tree'  (duration: 137.981496ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T21:04:33.133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.263568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-5d78c9869d\" ","response":"range_response_count:1 size:3635"}
	{"level":"info","ts":"2023-07-17T21:04:33.134Z","caller":"traceutil/trace.go:171","msg":"trace[1372278283] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-5d78c9869d; range_end:; response_count:1; response_revision:393; }","duration":"271.303633ms","start":"2023-07-17T21:04:32.862Z","end":"2023-07-17T21:04:33.134Z","steps":["trace[1372278283] 'agreement among raft nodes before linearized reading'  (duration: 132.852894ms)","trace[1372278283] 'range keys from in-memory index tree'  (duration: 138.390341ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T21:04:33.133Z","caller":"traceutil/trace.go:171","msg":"trace[1161803385] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:393; }","duration":"271.220302ms","start":"2023-07-17T21:04:32.862Z","end":"2023-07-17T21:04:33.133Z","steps":["trace[1161803385] 'agreement among raft nodes before linearized reading'  (duration: 132.946736ms)","trace[1161803385] 'range keys from in-memory index tree'  (duration: 138.201928ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T21:04:33.133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.584629ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"warn","ts":"2023-07-17T21:04:33.155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.263352ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128022497249607832 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/cloud-spanner-emulator.1772c3993159436b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/cloud-spanner-emulator.1772c3993159436b\" value_size:598 lease:8128022497249607409 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-07-17T21:04:33.140Z","caller":"traceutil/trace.go:171","msg":"trace[1583299335] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:393; }","duration":"237.724915ms","start":"2023-07-17T21:04:32.902Z","end":"2023-07-17T21:04:33.140Z","steps":["trace[1583299335] 'agreement among raft nodes before linearized reading'  (duration: 93.234657ms)","trace[1583299335] 'range keys from in-memory index tree'  (duration: 138.332044ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T21:04:33.155Z","caller":"traceutil/trace.go:171","msg":"trace[666401297] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"185.600276ms","start":"2023-07-17T21:04:32.970Z","end":"2023-07-17T21:04:33.155Z","steps":["trace[666401297] 'process raft request'  (duration: 28.051027ms)","trace[666401297] 'compare'  (duration: 155.142269ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T21:04:33.165Z","caller":"traceutil/trace.go:171","msg":"trace[50985805] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"187.019976ms","start":"2023-07-17T21:04:32.978Z","end":"2023-07-17T21:04:33.165Z","steps":["trace[50985805] 'process raft request'  (duration: 176.295366ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T21:04:33.166Z","caller":"traceutil/trace.go:171","msg":"trace[578147423] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"187.656833ms","start":"2023-07-17T21:04:32.978Z","end":"2023-07-17T21:04:33.166Z","steps":["trace[578147423] 'process raft request'  (duration: 176.272597ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T21:04:33.167Z","caller":"traceutil/trace.go:171","msg":"trace[894676572] linearizableReadLoop","detail":"{readStateIndex:405; appliedIndex:402; }","duration":"180.551329ms","start":"2023-07-17T21:04:32.986Z","end":"2023-07-17T21:04:33.167Z","steps":["trace[894676572] 'read index received'  (duration: 43.880067ms)","trace[894676572] 'applied index is now lower than readState.Index'  (duration: 136.670146ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T21:04:33.167Z","caller":"traceutil/trace.go:171","msg":"trace[846853337] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"166.528409ms","start":"2023-07-17T21:04:33.000Z","end":"2023-07-17T21:04:33.167Z","steps":["trace[846853337] 'process raft request'  (duration: 154.437966ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:04:33.178Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.083161ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T21:04:33.178Z","caller":"traceutil/trace.go:171","msg":"trace[1088875759] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:398; }","duration":"199.175772ms","start":"2023-07-17T21:04:32.979Z","end":"2023-07-17T21:04:33.178Z","steps":["trace[1088875759] 'agreement among raft nodes before linearized reading'  (duration: 199.047189ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:04:33.193Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.559042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-xr4zk\" ","response":"range_response_count:1 size:3689"}
	{"level":"info","ts":"2023-07-17T21:04:33.194Z","caller":"traceutil/trace.go:171","msg":"trace[1296581025] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-xr4zk; range_end:; response_count:1; response_revision:399; }","duration":"153.488486ms","start":"2023-07-17T21:04:33.040Z","end":"2023-07-17T21:04:33.193Z","steps":["trace[1296581025] 'agreement among raft nodes before linearized reading'  (duration: 152.204885ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T21:04:33.133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.720571ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T21:04:33.234Z","caller":"traceutil/trace.go:171","msg":"trace[163495400] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:393; }","duration":"332.284608ms","start":"2023-07-17T21:04:32.902Z","end":"2023-07-17T21:04:33.234Z","steps":["trace[163495400] 'agreement among raft nodes before linearized reading'  (duration: 93.372429ms)","trace[163495400] 'range keys from in-memory index tree'  (duration: 138.358768ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T21:04:33.234Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T21:04:32.902Z","time spent":"332.596977ms","remote":"127.0.0.1:40846","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":29,"request content":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" "}
	{"level":"warn","ts":"2023-07-17T21:04:33.133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.053586ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T21:04:33.241Z","caller":"traceutil/trace.go:171","msg":"trace[980707913] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:393; }","duration":"347.413719ms","start":"2023-07-17T21:04:32.893Z","end":"2023-07-17T21:04:33.241Z","steps":["trace[980707913] 'agreement among raft nodes before linearized reading'  (duration: 101.677866ms)","trace[980707913] 'range keys from in-memory index tree'  (duration: 138.372422ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T21:04:33.241Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T21:04:32.893Z","time spent":"347.555749ms","remote":"127.0.0.1:40820","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" "}
	
	* 
	* ==> gcp-auth [3519ade278cbd572463afdf97c0fa7e53affc5df9e64bbd9aca45a8ccbb3c8d0] <==
	* 2023/07/17 21:05:52 GCP Auth Webhook started!
	2023/07/17 21:06:10 Ready to marshal response ...
	2023/07/17 21:06:10 Ready to write response ...
	2023/07/17 21:06:10 Ready to marshal response ...
	2023/07/17 21:06:10 Ready to write response ...
	2023/07/17 21:06:10 Ready to marshal response ...
	2023/07/17 21:06:10 Ready to write response ...
	2023/07/17 21:06:13 Ready to marshal response ...
	2023/07/17 21:06:13 Ready to write response ...
	2023/07/17 21:06:27 Ready to marshal response ...
	2023/07/17 21:06:27 Ready to write response ...
	2023/07/17 21:06:38 Ready to marshal response ...
	2023/07/17 21:06:38 Ready to write response ...
	2023/07/17 21:07:02 Ready to marshal response ...
	2023/07/17 21:07:02 Ready to write response ...
	2023/07/17 21:09:01 Ready to marshal response ...
	2023/07/17 21:09:01 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:09:27 up  5:51,  0 users,  load average: 0.28, 1.43, 2.01
	Linux addons-966885 5.15.0-1039-aws #44~20.04.1-Ubuntu SMP Thu Jun 22 12:21:08 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [080f837576954ee16a2c2fde52cd77d4447b50546171733c3ada1a7eee9b337b] <==
	* I0717 21:07:24.534451       1 main.go:227] handling current node
	I0717 21:07:34.538133       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:07:34.538163       1 main.go:227] handling current node
	I0717 21:07:44.549074       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:07:44.549106       1 main.go:227] handling current node
	I0717 21:07:54.552970       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:07:54.553004       1 main.go:227] handling current node
	I0717 21:08:04.564469       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:08:04.564501       1 main.go:227] handling current node
	I0717 21:08:14.569114       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:08:14.569142       1 main.go:227] handling current node
	I0717 21:08:24.575262       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:08:24.575290       1 main.go:227] handling current node
	I0717 21:08:34.579369       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:08:34.579398       1 main.go:227] handling current node
	I0717 21:08:44.589501       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:08:44.589528       1 main.go:227] handling current node
	I0717 21:08:54.597311       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:08:54.597347       1 main.go:227] handling current node
	I0717 21:09:04.603839       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:09:04.603870       1 main.go:227] handling current node
	I0717 21:09:14.608341       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:09:14.608371       1 main.go:227] handling current node
	I0717 21:09:24.617616       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:09:24.617644       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [4093d666d14b8087e10b70ed1e6e22b792f6a6a363442d6a1d206e44a2e277ab] <==
	* I0717 21:07:20.267177       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:07:20.274427       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0717 21:07:20.282060       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:07:20.282211       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:07:20.292204       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:07:20.292252       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:07:20.317229       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:07:20.317360       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:07:20.358902       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:07:20.358965       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:07:20.381014       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:07:20.381105       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:07:20.404331       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:07:20.407285       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 21:07:20.420851       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 21:07:20.420910       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 21:07:21.359490       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 21:07:21.405074       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 21:07:21.466217       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0717 21:08:20.256083       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0717 21:08:20.256112       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 21:08:20.256150       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 21:08:20.256158       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 21:09:01.301665       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.100.155.179]
	
	* 
	* ==> kube-controller-manager [8264bc5e01c56b0b7fa8882fbaf9bb11501e5f658bd1a985792e3d51159d98df] <==
	* E0717 21:07:54.562174       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 21:07:54.585995       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:07:54.586029       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 21:07:57.839304       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:07:57.839338       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 21:08:20.840124       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:08:20.840158       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 21:08:24.281767       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:08:24.281800       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 21:08:25.980078       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:08:25.980112       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 21:08:27.056230       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:08:27.056265       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 21:08:53.823856       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:08:53.823888       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 21:09:01.023231       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0717 21:09:01.046292       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-4dm42"
	W0717 21:09:03.057281       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:09:03.057314       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 21:09:12.300645       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:09:12.300677       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 21:09:17.976160       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 21:09:17.976196       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 21:09:18.596037       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0717 21:09:18.619749       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	
	* 
	* ==> kube-proxy [5ffe69ef1ea8fa677eaaca48b3f5d095e7d68f9051e67898cd26781a20d79ff3] <==
	* I0717 21:04:34.642069       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0717 21:04:34.642221       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0717 21:04:34.642293       1 server_others.go:554] "Using iptables proxy"
	I0717 21:04:34.720113       1 server_others.go:192] "Using iptables Proxier"
	I0717 21:04:34.720217       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 21:04:34.720251       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 21:04:34.720305       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 21:04:34.721140       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 21:04:34.723851       1 server.go:658] "Version info" version="v1.27.3"
	I0717 21:04:34.723935       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 21:04:34.726514       1 config.go:188] "Starting service config controller"
	I0717 21:04:34.726771       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 21:04:34.726875       1 config.go:97] "Starting endpoint slice config controller"
	I0717 21:04:34.728093       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 21:04:34.731889       1 config.go:315] "Starting node config controller"
	I0717 21:04:34.731995       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 21:04:34.827692       1 shared_informer.go:318] Caches are synced for service config
	I0717 21:04:34.835344       1 shared_informer.go:318] Caches are synced for node config
	I0717 21:04:34.835363       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [1dd864a6d04ff06ddb9b0a08876c955571c78c233d1f1cea07924e5de5ba3ea7] <==
	* W0717 21:04:13.515541       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 21:04:13.515588       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 21:04:13.515684       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 21:04:13.515721       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 21:04:13.515824       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 21:04:13.515864       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 21:04:13.515938       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 21:04:13.515974       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 21:04:13.516054       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 21:04:13.516090       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 21:04:13.516216       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 21:04:13.516257       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 21:04:13.516332       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 21:04:13.516369       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 21:04:13.516463       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 21:04:13.516502       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 21:04:13.517697       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 21:04:13.517734       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 21:04:13.517747       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 21:04:13.517757       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 21:04:13.517876       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 21:04:13.517894       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 21:04:13.517954       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 21:04:13.517974       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0717 21:04:14.705265       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 21:09:16 addons-966885 kubelet[1352]: E0717 21:09:16.569415    1352 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/92c9c10ba0f58a204a1726fb6046b98c646b76ba9d91a71f91e2dfce8221d6e0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/92c9c10ba0f58a204a1726fb6046b98c646b76ba9d91a71f91e2dfce8221d6e0/diff: no such file or directory, extraDiskErr: <nil>
	Jul 17 21:09:16 addons-966885 kubelet[1352]: E0717 21:09:16.578837    1352 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e4af6f1d08d2c00550f88ad7db426d30d753ec34bc520764828190fa6c23a5c0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e4af6f1d08d2c00550f88ad7db426d30d753ec34bc520764828190fa6c23a5c0/diff: no such file or directory, extraDiskErr: <nil>
	Jul 17 21:09:16 addons-966885 kubelet[1352]: E0717 21:09:16.593066    1352 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/388a9333c8f8c9181e24d59ccf0d4969e79570e50b7596bdf0f8bbdf2b1d7c27/diff" to get inode usage: stat /var/lib/containers/storage/overlay/388a9333c8f8c9181e24d59ccf0d4969e79570e50b7596bdf0f8bbdf2b1d7c27/diff: no such file or directory, extraDiskErr: <nil>
	Jul 17 21:09:16 addons-966885 kubelet[1352]: E0717 21:09:16.598304    1352 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/76f86196c15c41061cbaa2aa8cc9442abf50b2af7edfc9e7fedbc37431e1f7f5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/76f86196c15c41061cbaa2aa8cc9442abf50b2af7edfc9e7fedbc37431e1f7f5/diff: no such file or directory, extraDiskErr: <nil>
	Jul 17 21:09:17 addons-966885 kubelet[1352]: I0717 21:09:17.251678    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5hzw\" (UniqueName: \"kubernetes.io/projected/fee2cc14-8383-4435-910e-25bbf22dbfb9-kube-api-access-z5hzw\") pod \"fee2cc14-8383-4435-910e-25bbf22dbfb9\" (UID: \"fee2cc14-8383-4435-910e-25bbf22dbfb9\") "
	Jul 17 21:09:17 addons-966885 kubelet[1352]: I0717 21:09:17.256146    1352 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee2cc14-8383-4435-910e-25bbf22dbfb9-kube-api-access-z5hzw" (OuterVolumeSpecName: "kube-api-access-z5hzw") pod "fee2cc14-8383-4435-910e-25bbf22dbfb9" (UID: "fee2cc14-8383-4435-910e-25bbf22dbfb9"). InnerVolumeSpecName "kube-api-access-z5hzw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 21:09:17 addons-966885 kubelet[1352]: I0717 21:09:17.352525    1352 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-z5hzw\" (UniqueName: \"kubernetes.io/projected/fee2cc14-8383-4435-910e-25bbf22dbfb9-kube-api-access-z5hzw\") on node \"addons-966885\" DevicePath \"\""
	Jul 17 21:09:17 addons-966885 kubelet[1352]: I0717 21:09:17.813262    1352 scope.go:115] "RemoveContainer" containerID="2c59a8bebeeacad7d6d0a56219baf79791824f635ea398975c07b29b7522d63f"
	Jul 17 21:09:18 addons-966885 kubelet[1352]: I0717 21:09:18.370348    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=fee2cc14-8383-4435-910e-25bbf22dbfb9 path="/var/lib/kubelet/pods/fee2cc14-8383-4435-910e-25bbf22dbfb9/volumes"
	Jul 17 21:09:18 addons-966885 kubelet[1352]: E0717 21:09:18.637972    1352 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-nvjrw.1772c3dbb5ede764", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-nvjrw", UID:"e0ee3adb-86d6-4def-9879-9bf12bf0836e", APIVersion:"v1", ResourceVersion:"761", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-966885"}, FirstTimestamp:time.Date(2023, time.July, 17, 21, 9, 18, 634878820, time.Local), LastTimestamp:time.Date(2023, time.July, 17, 21, 9, 18, 634878820, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-nvjrw.1772c3dbb5ede764" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 21:09:19 addons-966885 kubelet[1352]: I0717 21:09:19.968780    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0ee3adb-86d6-4def-9879-9bf12bf0836e-webhook-cert\") pod \"e0ee3adb-86d6-4def-9879-9bf12bf0836e\" (UID: \"e0ee3adb-86d6-4def-9879-9bf12bf0836e\") "
	Jul 17 21:09:19 addons-966885 kubelet[1352]: I0717 21:09:19.968849    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvfs5\" (UniqueName: \"kubernetes.io/projected/e0ee3adb-86d6-4def-9879-9bf12bf0836e-kube-api-access-wvfs5\") pod \"e0ee3adb-86d6-4def-9879-9bf12bf0836e\" (UID: \"e0ee3adb-86d6-4def-9879-9bf12bf0836e\") "
	Jul 17 21:09:19 addons-966885 kubelet[1352]: I0717 21:09:19.972037    1352 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ee3adb-86d6-4def-9879-9bf12bf0836e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "e0ee3adb-86d6-4def-9879-9bf12bf0836e" (UID: "e0ee3adb-86d6-4def-9879-9bf12bf0836e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 21:09:19 addons-966885 kubelet[1352]: I0717 21:09:19.973303    1352 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ee3adb-86d6-4def-9879-9bf12bf0836e-kube-api-access-wvfs5" (OuterVolumeSpecName: "kube-api-access-wvfs5") pod "e0ee3adb-86d6-4def-9879-9bf12bf0836e" (UID: "e0ee3adb-86d6-4def-9879-9bf12bf0836e"). InnerVolumeSpecName "kube-api-access-wvfs5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 21:09:20 addons-966885 kubelet[1352]: I0717 21:09:20.069092    1352 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e0ee3adb-86d6-4def-9879-9bf12bf0836e-webhook-cert\") on node \"addons-966885\" DevicePath \"\""
	Jul 17 21:09:20 addons-966885 kubelet[1352]: I0717 21:09:20.069144    1352 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wvfs5\" (UniqueName: \"kubernetes.io/projected/e0ee3adb-86d6-4def-9879-9bf12bf0836e-kube-api-access-wvfs5\") on node \"addons-966885\" DevicePath \"\""
	Jul 17 21:09:20 addons-966885 kubelet[1352]: I0717 21:09:20.369483    1352 scope.go:115] "RemoveContainer" containerID="4b1bfcfea867a0c5aadac9980377581fae2defe0b507cf3283c32ee94eff9b07"
	Jul 17 21:09:20 addons-966885 kubelet[1352]: I0717 21:09:20.380236    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=18ffb79d-563f-4da9-b5d4-acc2382e06a6 path="/var/lib/kubelet/pods/18ffb79d-563f-4da9-b5d4-acc2382e06a6/volumes"
	Jul 17 21:09:20 addons-966885 kubelet[1352]: I0717 21:09:20.380857    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=4ab1a59c-b365-4dc5-bed2-1252afcb8833 path="/var/lib/kubelet/pods/4ab1a59c-b365-4dc5-bed2-1252afcb8833/volumes"
	Jul 17 21:09:20 addons-966885 kubelet[1352]: I0717 21:09:20.381574    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=e0ee3adb-86d6-4def-9879-9bf12bf0836e path="/var/lib/kubelet/pods/e0ee3adb-86d6-4def-9879-9bf12bf0836e/volumes"
	Jul 17 21:09:20 addons-966885 kubelet[1352]: I0717 21:09:20.822056    1352 scope.go:115] "RemoveContainer" containerID="817afa02a8d70f997d2ae0bdac19c3012b08c08185fff85804372deccf0494e5"
	Jul 17 21:09:20 addons-966885 kubelet[1352]: I0717 21:09:20.825200    1352 scope.go:115] "RemoveContainer" containerID="b09b8b4551bf771414f1d49b0acbc02b8ebb7efb75ec58b09c30d24504b24396"
	Jul 17 21:09:20 addons-966885 kubelet[1352]: E0717 21:09:20.825468    1352 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-4dm42_default(30ac866c-f0e8-4b54-8286-a9dbdd9c9e23)\"" pod="default/hello-world-app-65bdb79f98-4dm42" podUID=30ac866c-f0e8-4b54-8286-a9dbdd9c9e23
	Jul 17 21:09:20 addons-966885 kubelet[1352]: I0717 21:09:20.854536    1352 scope.go:115] "RemoveContainer" containerID="4b1bfcfea867a0c5aadac9980377581fae2defe0b507cf3283c32ee94eff9b07"
	Jul 17 21:09:22 addons-966885 kubelet[1352]: W0717 21:09:22.410353    1352 container.go:586] Failed to update stats for container "/docker/89bf4ecdccc27b30aa16bd71ea382c37b63a33bbc07ffe7564881be7c6b9da7b/crio-c81f344b24fe555834328d7582d621f5f4e1a13670447ca11b4eda625c2fef29": unable to determine device info for dir: /var/lib/containers/storage/overlay/e4af6f1d08d2c00550f88ad7db426d30d753ec34bc520764828190fa6c23a5c0/diff: stat failed on /var/lib/containers/storage/overlay/e4af6f1d08d2c00550f88ad7db426d30d753ec34bc520764828190fa6c23a5c0/diff with error: no such file or directory, continuing to push stats
	
	* 
	* ==> storage-provisioner [e6c174538d161113d605b87d25978cfebacbe662d157798dd9a5a48e3811a0ad] <==
	* I0717 21:05:05.797669       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 21:05:05.812600       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 21:05:05.812780       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 21:05:05.823285       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 21:05:05.823838       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ae050c52-62e3-4396-bdf9-d66a91836fbc", APIVersion:"v1", ResourceVersion:"829", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-966885_55bfd5f1-c949-475f-9bd8-d3706aa2d0f6 became leader
	I0717 21:05:05.825485       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-966885_55bfd5f1-c949-475f-9bd8-d3706aa2d0f6!
	I0717 21:05:05.926483       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-966885_55bfd5f1-c949-475f-9bd8-d3706aa2d0f6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-966885 -n addons-966885
helpers_test.go:261: (dbg) Run:  kubectl --context addons-966885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (170.91s)
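The ssh probe above exited with status 28, curl's operation-timed-out code: the connection to 127.0.0.1:80 hung rather than being refused, and the follow-up nslookup against the ingress-dns address 192.168.49.2 also reached no server. The apiserver's metrics-server 503s in the logs predate the probe and are a separate condition. A minimal sketch for re-running the failed check by hand, assuming the addons-966885 profile is still up (the -v and -m 10 curl flags are diagnostic additions, not part of the test):

	# repeat the test's probe verbosely, with a 10s cap so a hang fails fast
	out/minikube-linux-arm64 -p addons-966885 ssh "curl -sv -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# if the controller Service lists no endpoints, the hang is expected
	kubectl --context addons-966885 -n ingress-nginx get pods,svc,endpoints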

                                                
                                    
x
+
TestErrorSpam/setup (32.64s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-526468 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-526468 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-526468 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-526468 --driver=docker  --container-runtime=crio: (32.642948072s)
error_spam_test.go:96: unexpected stderr: "! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1"
error_spam_test.go:110: minikube stdout:
* [nospam-526468] minikube v1.30.1 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=16890
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting control plane node nospam-526468 in cluster nospam-526468
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-526468" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
--- FAIL: TestErrorSpam/setup (32.64s)
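TestErrorSpam fails on any unexpected stderr from a plain start, so the single warning above is enough: the kicbase image in this run was built for minikube v1.31.0 while the binary under test reports v1.30.1. A sketch of recreating the cluster with a base image pinned to match the binary, assuming a matching tag exists in the registry (the v0.0.39 tag below is illustrative, not taken from this report):

	# recreate the profile from scratch with an explicitly pinned kicbase
	out/minikube-linux-arm64 delete -p nospam-526468
	out/minikube-linux-arm64 start -p nospam-526468 --driver=docker --container-runtime=crio \
	  --base-image gcr.io/k8s-minikube/kicbase:v0.0.39

Building the binary and the kicbase image from the same commit removes the skew at its source.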

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (184.13s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-822297 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0717 21:16:30.436651 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-822297 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.849454665s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-822297 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-822297 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0cf32d69-ae3c-4e8e-9cfd-a55eb44973e0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0cf32d69-ae3c-4e8e-9cfd-a55eb44973e0] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.016662491s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-822297 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0717 21:18:24.313137 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 21:18:24.318493 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 21:18:24.328786 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 21:18:24.349025 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 21:18:24.389346 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 21:18:24.469626 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 21:18:24.630031 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 21:18:24.950564 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 21:18:25.591542 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 21:18:26.871754 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 21:18:29.433046 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 21:18:34.553894 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 21:18:44.794472 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-822297 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.471409912s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-822297 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-822297 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0717 21:19:05.274718 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.019380367s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                
stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-822297 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-822297 addons disable ingress-dns --alsologtostderr -v=1: (1.629134356s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-822297 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-822297 addons disable ingress --alsologtostderr -v=1: (7.552320269s)
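As in the v1 ingress test, the curl probe timed out (ssh status 28) and nslookup gave up after roughly 15s with no server reachable at 192.168.49.2. A hedged way to probe the ingress-dns responder with short, explicit timeouts, assuming dig is available on the host (hello-john.test is the name defined in the test's ingress-dns-example-v1beta1.yaml):

	# one query, two-second timeout, against the ingress-dns address
	dig +time=2 +tries=1 @192.168.49.2 hello-john.test
	# the addon pod lives in kube-system; in recent minikube releases it is
	# typically named kube-ingress-dns-minikube (an assumption, not from this run)
	kubectl --context ingress-addon-legacy-822297 -n kube-system get pods

If that pod is absent or not Running, the timeout above is the expected outcome.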
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-822297
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-822297:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "731cab890b151605cab00d56351ac568ed81c12fde2880e060fb35305158411c",
	        "Created": "2023-07-17T21:14:59.348853017Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1163105,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T21:14:59.685553663Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/731cab890b151605cab00d56351ac568ed81c12fde2880e060fb35305158411c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/731cab890b151605cab00d56351ac568ed81c12fde2880e060fb35305158411c/hostname",
	        "HostsPath": "/var/lib/docker/containers/731cab890b151605cab00d56351ac568ed81c12fde2880e060fb35305158411c/hosts",
	        "LogPath": "/var/lib/docker/containers/731cab890b151605cab00d56351ac568ed81c12fde2880e060fb35305158411c/731cab890b151605cab00d56351ac568ed81c12fde2880e060fb35305158411c-json.log",
	        "Name": "/ingress-addon-legacy-822297",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-822297:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-822297",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/65fb07dfa86da28accf8c20e2456ca634232e1f563025b1436407a0bbb3b69e3-init/diff:/var/lib/docker/overlay2/9dd04002488337def4cdbea3f3d72ef7a2164867b83574414c8b40a7e2f88109/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65fb07dfa86da28accf8c20e2456ca634232e1f563025b1436407a0bbb3b69e3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65fb07dfa86da28accf8c20e2456ca634232e1f563025b1436407a0bbb3b69e3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65fb07dfa86da28accf8c20e2456ca634232e1f563025b1436407a0bbb3b69e3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-822297",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-822297/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-822297",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-822297",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-822297",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39e9713df2a9aa7b745f9aa9e09707329663937e1d2a178bdedd418e01a52282",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34041"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34040"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34037"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34039"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34038"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/39e9713df2a9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-822297": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "731cab890b15",
	                        "ingress-addon-legacy-822297"
	                    ],
	                    "NetworkID": "74174cb9b0156b7f90af814b2776751f5e10e4a8426839d35f9700899e5b1cc9",
	                    "EndpointID": "fa049f8c083592d091ce0b885bfd37a74288bd4633f1e2528640cbc75839e8a2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
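The inspect output shows every published port bound to loopback only, e.g. the Kubernetes API on 8443/tcp mapped to 127.0.0.1:34038. A quick way to list the same mappings without parsing the JSON, assuming the container is still running:

	# print host port bindings for the profile's container
	docker port ingress-addon-legacy-822297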
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-822297 -n ingress-addon-legacy-822297
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-822297 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-822297 logs -n 25: (1.410712414s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-812870 image load --daemon                                  | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-812870               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-812870 image ls                                             | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	| image   | functional-812870 image load --daemon                                  | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-812870               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-812870 image ls                                             | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	| image   | functional-812870 image save                                           | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-812870               |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-812870 image rm                                             | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-812870               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-812870 image ls                                             | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	| image   | functional-812870 image load                                           | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-812870 image ls                                             | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	| image   | functional-812870 image save --daemon                                  | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-812870               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-812870                                                      | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	|         | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-812870                                                      | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	|         | image ls --format short                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-812870                                                      | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	|         | image ls --format json                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh     | functional-812870 ssh pgrep                                            | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC |                     |
	|         | buildkitd                                                              |                             |         |         |                     |                     |
	| image   | functional-812870                                                      | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	|         | image ls --format table                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-812870 image build -t                                       | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	|         | localhost/my-image:functional-812870                                   |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image   | functional-812870 image ls                                             | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	| delete  | -p functional-812870                                                   | functional-812870           | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:14 UTC |
	| start   | -p ingress-addon-legacy-822297                                         | ingress-addon-legacy-822297 | jenkins | v1.30.1 | 17 Jul 23 21:14 UTC | 17 Jul 23 21:16 UTC |
	|         | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-822297                                            | ingress-addon-legacy-822297 | jenkins | v1.30.1 | 17 Jul 23 21:16 UTC | 17 Jul 23 21:16 UTC |
	|         | addons enable ingress                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-822297                                            | ingress-addon-legacy-822297 | jenkins | v1.30.1 | 17 Jul 23 21:16 UTC | 17 Jul 23 21:16 UTC |
	|         | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-822297                                            | ingress-addon-legacy-822297 | jenkins | v1.30.1 | 17 Jul 23 21:16 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-822297 ip                                         | ingress-addon-legacy-822297 | jenkins | v1.30.1 | 17 Jul 23 21:19 UTC | 17 Jul 23 21:19 UTC |
	| addons  | ingress-addon-legacy-822297                                            | ingress-addon-legacy-822297 | jenkins | v1.30.1 | 17 Jul 23 21:19 UTC | 17 Jul 23 21:19 UTC |
	|         | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-822297                                            | ingress-addon-legacy-822297 | jenkins | v1.30.1 | 17 Jul 23 21:19 UTC | 17 Jul 23 21:19 UTC |
	|         | addons disable ingress                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:14:35
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:14:35.242367 1162650 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:14:35.242581 1162650 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:14:35.242591 1162650 out.go:309] Setting ErrFile to fd 2...
	I0717 21:14:35.242597 1162650 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:14:35.242888 1162650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
	I0717 21:14:35.243392 1162650 out.go:303] Setting JSON to false
	I0717 21:14:35.244651 1162650 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21419,"bootTime":1689607057,"procs":397,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0717 21:14:35.244732 1162650 start.go:138] virtualization:  
	I0717 21:14:35.247060 1162650 out.go:177] * [ingress-addon-legacy-822297] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 21:14:35.249033 1162650 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 21:14:35.250706 1162650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:14:35.249214 1162650 notify.go:220] Checking for updates...
	I0717 21:14:35.254466 1162650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:14:35.256211 1162650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	I0717 21:14:35.258111 1162650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 21:14:35.259622 1162650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:14:35.261460 1162650 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:14:35.285574 1162650 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:14:35.285681 1162650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:14:35.378472 1162650 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-17 21:14:35.368888197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:14:35.378589 1162650 docker.go:294] overlay module found
	I0717 21:14:35.381456 1162650 out.go:177] * Using the docker driver based on user configuration
	I0717 21:14:35.382933 1162650 start.go:298] selected driver: docker
	I0717 21:14:35.382949 1162650 start.go:880] validating driver "docker" against <nil>
	I0717 21:14:35.382962 1162650 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:14:35.383622 1162650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:14:35.450524 1162650 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-17 21:14:35.441062259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:14:35.450688 1162650 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 21:14:35.450902 1162650 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 21:14:35.452508 1162650 out.go:177] * Using Docker driver with root privileges
	I0717 21:14:35.454271 1162650 cni.go:84] Creating CNI manager for ""
	I0717 21:14:35.454290 1162650 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:14:35.454304 1162650 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 21:14:35.454331 1162650 start_flags.go:319] config:
	{Name:ingress-addon-legacy-822297 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-822297 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:14:35.456363 1162650 out.go:177] * Starting control plane node ingress-addon-legacy-822297 in cluster ingress-addon-legacy-822297
	I0717 21:14:35.458029 1162650 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 21:14:35.459682 1162650 out.go:177] * Pulling base image ...
	I0717 21:14:35.461195 1162650 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 21:14:35.461344 1162650 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 21:14:35.478785 1162650 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 21:14:35.478811 1162650 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 21:14:35.527777 1162650 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0717 21:14:35.527817 1162650 cache.go:57] Caching tarball of preloaded images
	I0717 21:14:35.527989 1162650 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 21:14:35.530039 1162650 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0717 21:14:35.531615 1162650 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0717 21:14:35.654281 1162650 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
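
	The download above can be reproduced by hand if a preload needs inspecting; a minimal sketch, with the URL and md5 checksum copied verbatim from the log line and the curl/md5sum invocation being our own:

	    curl -fSL -o preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 \
	      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4"
	    # md5sum -c expects "<checksum>  <file>"; the checksum comes from the ?checksum= query above
	    echo "8ddd7f37d9a9977fe856222993d36c3d  preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4" | md5sum -c -
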
	I0717 21:14:51.571031 1162650 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0717 21:14:51.571135 1162650 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0717 21:14:52.680780 1162650 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0717 21:14:52.681133 1162650 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/config.json ...
	I0717 21:14:52.681185 1162650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/config.json: {Name:mk6e13122d9aa37537040c44fd5fe20dd5ead6a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:14:52.681375 1162650 cache.go:195] Successfully downloaded all kic artifacts
	I0717 21:14:52.681420 1162650 start.go:365] acquiring machines lock for ingress-addon-legacy-822297: {Name:mk170debce48d9b87839415f35032b64f263120f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:14:52.681480 1162650 start.go:369] acquired machines lock for "ingress-addon-legacy-822297" in 46.039µs
	I0717 21:14:52.681502 1162650 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-822297 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-822297 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 21:14:52.681576 1162650 start.go:125] createHost starting for "" (driver="docker")
	I0717 21:14:52.685000 1162650 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0717 21:14:52.685268 1162650 start.go:159] libmachine.API.Create for "ingress-addon-legacy-822297" (driver="docker")
	I0717 21:14:52.685316 1162650 client.go:168] LocalClient.Create starting
	I0717 21:14:52.685410 1162650 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem
	I0717 21:14:52.685454 1162650 main.go:141] libmachine: Decoding PEM data...
	I0717 21:14:52.685473 1162650 main.go:141] libmachine: Parsing certificate...
	I0717 21:14:52.685538 1162650 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem
	I0717 21:14:52.685563 1162650 main.go:141] libmachine: Decoding PEM data...
	I0717 21:14:52.685579 1162650 main.go:141] libmachine: Parsing certificate...
	I0717 21:14:52.685940 1162650 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-822297 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 21:14:52.702842 1162650 cli_runner.go:211] docker network inspect ingress-addon-legacy-822297 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 21:14:52.702934 1162650 network_create.go:281] running [docker network inspect ingress-addon-legacy-822297] to gather additional debugging logs...
	I0717 21:14:52.702955 1162650 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-822297
	W0717 21:14:52.720733 1162650 cli_runner.go:211] docker network inspect ingress-addon-legacy-822297 returned with exit code 1
	I0717 21:14:52.720770 1162650 network_create.go:284] error running [docker network inspect ingress-addon-legacy-822297]: docker network inspect ingress-addon-legacy-822297: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-822297 not found
	I0717 21:14:52.720786 1162650 network_create.go:286] output of [docker network inspect ingress-addon-legacy-822297]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-822297 not found
	
	** /stderr **
	I0717 21:14:52.720852 1162650 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 21:14:52.739200 1162650 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000dc1e60}
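
	The long Go template minikube feeds to `docker network inspect` above flattens a network's IPAM config onto one line so the subnet picker can parse it. A trimmed sketch of the same idea (our own simplification, run against the always-present bridge network):

	    docker network inspect bridge \
	      --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
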
	I0717 21:14:52.739267 1162650 network_create.go:123] attempt to create docker network ingress-addon-legacy-822297 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 21:14:52.739330 1162650 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-822297 ingress-addon-legacy-822297
	I0717 21:14:52.805054 1162650 network_create.go:107] docker network ingress-addon-legacy-822297 192.168.49.0/24 created
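
	The create call can be replayed and checked by hand; subnet, gateway, MTU and labels below are copied from the logged command, and the inspect step is our own verification:

	    docker network create --driver=bridge \
	      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	      -o com.docker.network.driver.mtu=1500 \
	      --label=created_by.minikube.sigs.k8s.io=true \
	      --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-822297 \
	      ingress-addon-legacy-822297
	    # should print 192.168.49.0/24
	    docker network inspect ingress-addon-legacy-822297 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
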
	I0717 21:14:52.805082 1162650 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-822297" container
	I0717 21:14:52.805253 1162650 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 21:14:52.821598 1162650 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-822297 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-822297 --label created_by.minikube.sigs.k8s.io=true
	I0717 21:14:52.839856 1162650 oci.go:103] Successfully created a docker volume ingress-addon-legacy-822297
	I0717 21:14:52.839952 1162650 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-822297-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-822297 --entrypoint /usr/bin/test -v ingress-addon-legacy-822297:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 21:14:54.332621 1162650 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-822297-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-822297 --entrypoint /usr/bin/test -v ingress-addon-legacy-822297:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.492617869s)
	I0717 21:14:54.332653 1162650 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-822297
	I0717 21:14:54.332679 1162650 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 21:14:54.332699 1162650 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 21:14:54.332789 1162650 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-822297:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 21:14:59.251647 1162650 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-822297:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.918812371s)
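
	Both `docker run` calls above use the same "sidecar extract" pattern: bind-mount the tarball read-only, mount the named volume at the destination, and let the tar binary inside the image unpack it. A generic sketch with IMAGE and VOLUME as placeholders:

	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
	      -v VOLUME:/extractDir \
	      IMAGE -I lz4 -xf /preloaded.tar -C /extractDir
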
	I0717 21:14:59.251687 1162650 kic.go:199] duration metric: took 4.918985 seconds to extract preloaded images to volume
	W0717 21:14:59.251829 1162650 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 21:14:59.251937 1162650 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 21:14:59.330923 1162650 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-822297 --name ingress-addon-legacy-822297 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-822297 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-822297 --network ingress-addon-legacy-822297 --ip 192.168.49.2 --volume ingress-addon-legacy-822297:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 21:14:59.695127 1162650 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-822297 --format={{.State.Running}}
	I0717 21:14:59.718893 1162650 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-822297 --format={{.State.Status}}
	I0717 21:14:59.751368 1162650 cli_runner.go:164] Run: docker exec ingress-addon-legacy-822297 stat /var/lib/dpkg/alternatives/iptables
	I0717 21:14:59.855056 1162650 oci.go:144] the created container "ingress-addon-legacy-822297" has a running status.
	I0717 21:14:59.855086 1162650 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/ingress-addon-legacy-822297/id_rsa...
	I0717 21:15:01.059619 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/ingress-addon-legacy-822297/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0717 21:15:01.059712 1162650 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/ingress-addon-legacy-822297/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 21:15:01.093858 1162650 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-822297 --format={{.State.Status}}
	I0717 21:15:01.119111 1162650 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 21:15:01.119131 1162650 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-822297 chown docker:docker /home/docker/.ssh/authorized_keys]
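
	The key provisioning above amounts to: generate a key pair on the host, copy the public half into the node container, and fix its ownership. A rough hand-run equivalent (the in-container path matches the log; the ssh-keygen flags and `docker cp` step are assumptions):

	    ssh-keygen -t rsa -N '' -f id_rsa
	    docker cp id_rsa.pub ingress-addon-legacy-822297:/home/docker/.ssh/authorized_keys
	    docker exec --privileged ingress-addon-legacy-822297 chown docker:docker /home/docker/.ssh/authorized_keys
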
	I0717 21:15:01.200602 1162650 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-822297 --format={{.State.Status}}
	I0717 21:15:01.223898 1162650 machine.go:88] provisioning docker machine ...
	I0717 21:15:01.223929 1162650 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-822297"
	I0717 21:15:01.224007 1162650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-822297
	I0717 21:15:01.250649 1162650 main.go:141] libmachine: Using SSH client type: native
	I0717 21:15:01.251125 1162650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34041 <nil> <nil>}
	I0717 21:15:01.251138 1162650 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-822297 && echo "ingress-addon-legacy-822297" | sudo tee /etc/hostname
	I0717 21:15:01.414158 1162650 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-822297
	
	I0717 21:15:01.414299 1162650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-822297
	I0717 21:15:01.434311 1162650 main.go:141] libmachine: Using SSH client type: native
	I0717 21:15:01.434757 1162650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34041 <nil> <nil>}
	I0717 21:15:01.434784 1162650 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-822297' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-822297/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-822297' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 21:15:01.566818 1162650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 21:15:01.566848 1162650 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1130480/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1130480/.minikube}
	I0717 21:15:01.566872 1162650 ubuntu.go:177] setting up certificates
	I0717 21:15:01.566881 1162650 provision.go:83] configureAuth start
	I0717 21:15:01.566947 1162650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-822297
	I0717 21:15:01.589488 1162650 provision.go:138] copyHostCerts
	I0717 21:15:01.589533 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem
	I0717 21:15:01.589592 1162650 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem, removing ...
	I0717 21:15:01.589604 1162650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem
	I0717 21:15:01.589683 1162650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem (1082 bytes)
	I0717 21:15:01.589772 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem
	I0717 21:15:01.589797 1162650 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem, removing ...
	I0717 21:15:01.589802 1162650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem
	I0717 21:15:01.589835 1162650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem (1123 bytes)
	I0717 21:15:01.589899 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem
	I0717 21:15:01.589922 1162650 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem, removing ...
	I0717 21:15:01.589927 1162650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem
	I0717 21:15:01.589952 1162650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem (1675 bytes)
	I0717 21:15:01.590058 1162650 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-822297 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-822297]
	I0717 21:15:02.675560 1162650 provision.go:172] copyRemoteCerts
	I0717 21:15:02.675652 1162650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 21:15:02.675703 1162650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-822297
	I0717 21:15:02.698521 1162650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34041 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/ingress-addon-legacy-822297/id_rsa Username:docker}
	I0717 21:15:02.799774 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 21:15:02.799835 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 21:15:02.828366 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 21:15:02.828433 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0717 21:15:02.857273 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 21:15:02.857337 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 21:15:02.886985 1162650 provision.go:86] duration metric: configureAuth took 1.320091003s
	I0717 21:15:02.887019 1162650 ubuntu.go:193] setting minikube options for container-runtime
	I0717 21:15:02.887217 1162650 config.go:182] Loaded profile config "ingress-addon-legacy-822297": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0717 21:15:02.887334 1162650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-822297
	I0717 21:15:02.905416 1162650 main.go:141] libmachine: Using SSH client type: native
	I0717 21:15:02.905869 1162650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34041 <nil> <nil>}
	I0717 21:15:02.905890 1162650 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 21:15:03.195810 1162650 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 21:15:03.195835 1162650 machine.go:91] provisioned docker machine in 1.971914844s
	I0717 21:15:03.195844 1162650 client.go:171] LocalClient.Create took 10.51052287s
	I0717 21:15:03.195879 1162650 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-822297" took 10.510611813s
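
	The CRIO_MINIKUBE_OPTIONS file written a few lines above can be checked from the host; a verification step of our own, with the path and expected content taken from the log:

	    docker exec ingress-addon-legacy-822297 cat /etc/sysconfig/crio.minikube
	    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
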
	I0717 21:15:03.195895 1162650 start.go:300] post-start starting for "ingress-addon-legacy-822297" (driver="docker")
	I0717 21:15:03.195904 1162650 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 21:15:03.195992 1162650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 21:15:03.196077 1162650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-822297
	I0717 21:15:03.215317 1162650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34041 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/ingress-addon-legacy-822297/id_rsa Username:docker}
	I0717 21:15:03.312710 1162650 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 21:15:03.317029 1162650 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 21:15:03.317078 1162650 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 21:15:03.317091 1162650 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 21:15:03.317098 1162650 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 21:15:03.317112 1162650 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/addons for local assets ...
	I0717 21:15:03.317228 1162650 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/files for local assets ...
	I0717 21:15:03.317314 1162650 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem -> 11358722.pem in /etc/ssl/certs
	I0717 21:15:03.317326 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem -> /etc/ssl/certs/11358722.pem
	I0717 21:15:03.317437 1162650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 21:15:03.328261 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem --> /etc/ssl/certs/11358722.pem (1708 bytes)
	I0717 21:15:03.356972 1162650 start.go:303] post-start completed in 161.062672ms
	I0717 21:15:03.357493 1162650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-822297
	I0717 21:15:03.375434 1162650 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/config.json ...
	I0717 21:15:03.375715 1162650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:15:03.375774 1162650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-822297
	I0717 21:15:03.393492 1162650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34041 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/ingress-addon-legacy-822297/id_rsa Username:docker}
	I0717 21:15:03.483371 1162650 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 21:15:03.489108 1162650 start.go:128] duration metric: createHost completed in 10.807517593s
	I0717 21:15:03.489132 1162650 start.go:83] releasing machines lock for "ingress-addon-legacy-822297", held for 10.807642377s
	I0717 21:15:03.489233 1162650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-822297
	I0717 21:15:03.506554 1162650 ssh_runner.go:195] Run: cat /version.json
	I0717 21:15:03.506587 1162650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 21:15:03.506606 1162650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-822297
	I0717 21:15:03.506647 1162650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-822297
	I0717 21:15:03.526363 1162650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34041 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/ingress-addon-legacy-822297/id_rsa Username:docker}
	I0717 21:15:03.526882 1162650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34041 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/ingress-addon-legacy-822297/id_rsa Username:docker}
	W0717 21:15:03.617743 1162650 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 21:15:03.617900 1162650 ssh_runner.go:195] Run: systemctl --version
	I0717 21:15:03.765756 1162650 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 21:15:03.918783 1162650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 21:15:03.924741 1162650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:15:03.949031 1162650 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 21:15:03.949118 1162650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:15:03.989743 1162650 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
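
	The find/mv pipeline above disables conflicting CNI configs by appending a .mk_disabled suffix instead of deleting them, so they can be restored later. Applied by hand to one of the files named in the log:

	    sudo mv /etc/cni/net.d/87-podman-bridge.conflist \
	            /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
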
	I0717 21:15:03.989764 1162650 start.go:469] detecting cgroup driver to use...
	I0717 21:15:03.989796 1162650 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 21:15:03.989858 1162650 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 21:15:04.012738 1162650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 21:15:04.028085 1162650 docker.go:196] disabling cri-docker service (if available) ...
	I0717 21:15:04.028168 1162650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 21:15:04.045820 1162650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 21:15:04.065247 1162650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 21:15:04.161140 1162650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 21:15:04.268929 1162650 docker.go:212] disabling docker service ...
	I0717 21:15:04.269034 1162650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 21:15:04.292235 1162650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 21:15:04.307484 1162650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 21:15:04.402089 1162650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 21:15:04.510545 1162650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 21:15:04.524471 1162650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 21:15:04.546293 1162650 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 21:15:04.546366 1162650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:15:04.559264 1162650 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 21:15:04.559337 1162650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:15:04.572346 1162650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:15:04.584941 1162650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
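
	Assuming each sed expression above matched exactly once, /etc/crio/crio.conf.d/02-crio.conf ends up with these three settings (values copied from the logged commands):

	    pause_image = "registry.k8s.io/pause:3.2"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
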
	I0717 21:15:04.597331 1162650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 21:15:04.609214 1162650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 21:15:04.620056 1162650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 21:15:04.630702 1162650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 21:15:04.722278 1162650 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 21:15:04.839782 1162650 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 21:15:04.839913 1162650 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 21:15:04.844947 1162650 start.go:537] Will wait 60s for crictl version
	I0717 21:15:04.845014 1162650 ssh_runner.go:195] Run: which crictl
	I0717 21:15:04.849734 1162650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 21:15:04.902981 1162650 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 21:15:04.903122 1162650 ssh_runner.go:195] Run: crio --version
	I0717 21:15:04.948256 1162650 ssh_runner.go:195] Run: crio --version
	I0717 21:15:04.994419 1162650 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0717 21:15:04.996833 1162650 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-822297 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 21:15:05.016358 1162650 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 21:15:05.021677 1162650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 21:15:05.037069 1162650 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 21:15:05.037139 1162650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:15:05.093437 1162650 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0717 21:15:05.093527 1162650 ssh_runner.go:195] Run: which lz4
	I0717 21:15:05.098808 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0717 21:15:05.098931 1162650 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 21:15:05.104291 1162650 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 21:15:05.104327 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0717 21:15:07.283200 1162650 crio.go:444] Took 2.184314 seconds to copy over tarball
	I0717 21:15:07.283364 1162650 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 21:15:10.017671 1162650 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.734261293s)
	I0717 21:15:10.017723 1162650 crio.go:451] Took 2.734427 seconds to extract the tarball
	I0717 21:15:10.017734 1162650 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 21:15:10.109142 1162650 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:15:10.155607 1162650 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0717 21:15:10.155634 1162650 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 21:15:10.155687 1162650 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:15:10.155899 1162650 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 21:15:10.155974 1162650 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 21:15:10.156036 1162650 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 21:15:10.156100 1162650 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 21:15:10.156176 1162650 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 21:15:10.156236 1162650 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 21:15:10.156295 1162650 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0717 21:15:10.157208 1162650 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 21:15:10.157706 1162650 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 21:15:10.157896 1162650 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:15:10.157954 1162650 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 21:15:10.158189 1162650 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0717 21:15:10.158378 1162650 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 21:15:10.158448 1162650 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 21:15:10.158538 1162650 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0717 21:15:10.570982 1162650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0717 21:15:10.593722 1162650 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 21:15:10.593928 1162650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0717 21:15:10.597949 1162650 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 21:15:10.598112 1162650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0717 21:15:10.605852 1162650 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0717 21:15:10.606033 1162650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W0717 21:15:10.630737 1162650 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 21:15:10.630936 1162650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0717 21:15:10.641082 1162650 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 21:15:10.641454 1162650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0717 21:15:10.647045 1162650 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0717 21:15:10.647099 1162650 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 21:15:10.647148 1162650 ssh_runner.go:195] Run: which crictl
	W0717 21:15:10.663279 1162650 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0717 21:15:10.663513 1162650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0717 21:15:10.694362 1162650 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0717 21:15:10.694418 1162650 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 21:15:10.694472 1162650 ssh_runner.go:195] Run: which crictl
	I0717 21:15:10.765496 1162650 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0717 21:15:10.765543 1162650 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 21:15:10.765600 1162650 ssh_runner.go:195] Run: which crictl
	I0717 21:15:10.765793 1162650 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0717 21:15:10.765824 1162650 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0717 21:15:10.765869 1162650 ssh_runner.go:195] Run: which crictl
	I0717 21:15:10.793645 1162650 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0717 21:15:10.793687 1162650 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 21:15:10.793734 1162650 ssh_runner.go:195] Run: which crictl
	I0717 21:15:10.815043 1162650 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0717 21:15:10.815083 1162650 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 21:15:10.815132 1162650 ssh_runner.go:195] Run: which crictl
	I0717 21:15:10.815206 1162650 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 21:15:10.826491 1162650 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 21:15:10.826567 1162650 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0717 21:15:10.826601 1162650 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0717 21:15:10.826629 1162650 ssh_runner.go:195] Run: which crictl
	I0717 21:15:10.826698 1162650 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0717 21:15:10.826751 1162650 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0717 21:15:10.826809 1162650 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	W0717 21:15:10.878689 1162650 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0717 21:15:10.878864 1162650 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:15:10.935636 1162650 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0717 21:15:10.935725 1162650 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0717 21:15:10.982147 1162650 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0717 21:15:10.982214 1162650 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0717 21:15:10.982272 1162650 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0717 21:15:10.982336 1162650 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0717 21:15:10.982375 1162650 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0717 21:15:11.137799 1162650 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0717 21:15:11.137858 1162650 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:15:11.137912 1162650 ssh_runner.go:195] Run: which crictl
	I0717 21:15:11.137980 1162650 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0717 21:15:11.137995 1162650 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0717 21:15:11.142372 1162650 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:15:11.202343 1162650 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 21:15:11.202431 1162650 cache_images.go:92] LoadImages completed in 1.046783922s
	W0717 21:15:11.202514 1162650 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
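
The "needs transfer" decisions above boil down to comparing the image ID the runtime reports against the ID recorded in minikube's image cache. A minimal standalone sketch of that check (not minikube's actual code; the image name and expected ID are copied from the pause:3.2 lines above):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // inspectID asks the container runtime for the stored ID of an image;
    // an error (or empty output) means the image is not present yet.
    func inspectID(image string) (string, error) {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        image := "registry.k8s.io/pause:3.2"
        // Expected ID, as reported for this image in the log above.
        want := "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c"

        got, err := inspectID(image)
        if err != nil || got != want {
            fmt.Printf("%q needs transfer: runtime has %q, want %q\n", image, got, want)
            return
        }
        fmt.Printf("%q already present, skipping transfer\n", image)
    }
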
	I0717 21:15:11.202595 1162650 ssh_runner.go:195] Run: crio config
	I0717 21:15:11.267316 1162650 cni.go:84] Creating CNI manager for ""
	I0717 21:15:11.267340 1162650 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:15:11.267352 1162650 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 21:15:11.267369 1162650 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-822297 NodeName:ingress-addon-legacy-822297 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 21:15:11.267514 1162650 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-822297"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 21:15:11.267590 1162650 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-822297 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-822297 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 21:15:11.267653 1162650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0717 21:15:11.279043 1162650 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 21:15:11.279118 1162650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 21:15:11.290155 1162650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0717 21:15:11.311955 1162650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0717 21:15:11.333688 1162650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
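
The "scp memory --> <path>" lines above stream an in-memory byte slice straight to a file on the node rather than copying a local file. A rough sketch of that pattern over SSH, assuming golang.org/x/crypto/ssh; the address and credentials are placeholders, not values from this run:

    package main

    import (
        "bytes"
        "log"

        "golang.org/x/crypto/ssh"
    )

    // writeRemote streams in-memory bytes to a root-owned path on the host,
    // roughly what the "scp memory --> <path>" log lines describe.
    func writeRemote(client *ssh.Client, path string, data []byte) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        // tee writes stdin to the target file; sudo is needed for /etc and /lib.
        return sess.Run("sudo tee " + path + " >/dev/null")
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.Password("placeholder")}, // real runs use the machine's id_rsa key
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),                   // acceptable only for a local test node
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:22", cfg) // placeholder address
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        if err := writeRemote(client, "/var/tmp/minikube/kubeadm.yaml.new", []byte("# rendered config\n")); err != nil {
            log.Fatal(err)
        }
    }
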
	I0717 21:15:11.356019 1162650 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 21:15:11.360625 1162650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
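
The bash one-liner above makes the /etc/hosts update idempotent: grep -v drops any stale control-plane.minikube.internal line, the fresh mapping is appended, and the result is copied back over /etc/hosts via a temp file. The same filter-and-append logic as a small sketch (writing to a scratch file rather than /etc/hosts):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites a hosts file so that exactly one line maps
    // the given name, mirroring the grep -v / append one-liner above.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping for this name
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("hosts.test", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
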
	I0717 21:15:11.375077 1162650 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297 for IP: 192.168.49.2
	I0717 21:15:11.375110 1162650 certs.go:190] acquiring lock for shared ca certs: {Name:mk8e5c72a7d7e3f9ffe23960b258dcb0da4448fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:15:11.375301 1162650 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.key
	I0717 21:15:11.375371 1162650 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.key
	I0717 21:15:11.375440 1162650 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.key
	I0717 21:15:11.375474 1162650 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt with IP's: []
	I0717 21:15:11.788400 1162650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt ...
	I0717 21:15:11.788433 1162650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: {Name:mkd7a1375dbe13d79310a34d899373fc5f12526d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:15:11.788642 1162650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.key ...
	I0717 21:15:11.788656 1162650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.key: {Name:mk32a939864fcec008ba3a9445f0eef6f074793b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:15:11.788748 1162650 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/apiserver.key.dd3b5fb2
	I0717 21:15:11.788768 1162650 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 21:15:11.969977 1162650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/apiserver.crt.dd3b5fb2 ...
	I0717 21:15:11.970008 1162650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/apiserver.crt.dd3b5fb2: {Name:mk5dce028d26000753ead1733f73f3d420120073 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:15:11.970188 1162650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/apiserver.key.dd3b5fb2 ...
	I0717 21:15:11.970200 1162650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/apiserver.key.dd3b5fb2: {Name:mk82918a718224586d5559e2bfed515adaa3c2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:15:11.970282 1162650 certs.go:337] copying /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/apiserver.crt
	I0717 21:15:11.970368 1162650 certs.go:341] copying /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/apiserver.key
	I0717 21:15:11.970426 1162650 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/proxy-client.key
	I0717 21:15:11.970438 1162650 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/proxy-client.crt with IP's: []
	I0717 21:15:12.465245 1162650 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/proxy-client.crt ...
	I0717 21:15:12.465276 1162650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/proxy-client.crt: {Name:mka2fb1086d1fd19b25972080a7abab8c39833a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:15:12.465471 1162650 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/proxy-client.key ...
	I0717 21:15:12.465484 1162650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/proxy-client.key: {Name:mkf4e173454b2935e68fd17db281f739ab1106ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
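
Each certificate above is produced by generating a key pair and signing a template carrying the listed IP SANs. A self-contained approximation with Go's crypto/x509 (self-signed here for brevity, whereas the run above signs with the shared minikubeCA):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // matches the 26280h CertExpiration in this profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The IP SANs listed for the apiserver cert in the log above.
            IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1")},
        }
        // Self-signed: the template doubles as its own parent.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        f, err := os.Create("apiserver.crt")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        if err := pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            log.Fatal(err)
        }
    }
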
	I0717 21:15:12.465574 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 21:15:12.465593 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 21:15:12.465607 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 21:15:12.465622 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 21:15:12.465632 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 21:15:12.465648 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 21:15:12.465662 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 21:15:12.465674 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 21:15:12.465753 1162650 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/1135872.pem (1338 bytes)
	W0717 21:15:12.465791 1162650 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/1135872_empty.pem, impossibly tiny 0 bytes
	I0717 21:15:12.465805 1162650 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 21:15:12.465834 1162650 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem (1082 bytes)
	I0717 21:15:12.465865 1162650 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem (1123 bytes)
	I0717 21:15:12.465899 1162650 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem (1675 bytes)
	I0717 21:15:12.465948 1162650 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem (1708 bytes)
	I0717 21:15:12.465982 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:15:12.465998 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/1135872.pem -> /usr/share/ca-certificates/1135872.pem
	I0717 21:15:12.466011 1162650 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem -> /usr/share/ca-certificates/11358722.pem
	I0717 21:15:12.466632 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 21:15:12.496435 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 21:15:12.524870 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 21:15:12.554107 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 21:15:12.582393 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 21:15:12.610449 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 21:15:12.639075 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 21:15:12.669579 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 21:15:12.699400 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 21:15:12.728959 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/1135872.pem --> /usr/share/ca-certificates/1135872.pem (1338 bytes)
	I0717 21:15:12.758609 1162650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem --> /usr/share/ca-certificates/11358722.pem (1708 bytes)
	I0717 21:15:12.787688 1162650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 21:15:12.809342 1162650 ssh_runner.go:195] Run: openssl version
	I0717 21:15:12.816476 1162650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11358722.pem && ln -fs /usr/share/ca-certificates/11358722.pem /etc/ssl/certs/11358722.pem"
	I0717 21:15:12.828123 1162650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11358722.pem
	I0717 21:15:12.832797 1162650 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:10 /usr/share/ca-certificates/11358722.pem
	I0717 21:15:12.832871 1162650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11358722.pem
	I0717 21:15:12.841686 1162650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11358722.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 21:15:12.853459 1162650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 21:15:12.865253 1162650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:15:12.870220 1162650 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:03 /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:15:12.870337 1162650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:15:12.879230 1162650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 21:15:12.890938 1162650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1135872.pem && ln -fs /usr/share/ca-certificates/1135872.pem /etc/ssl/certs/1135872.pem"
	I0717 21:15:12.903059 1162650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1135872.pem
	I0717 21:15:12.907795 1162650 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:10 /usr/share/ca-certificates/1135872.pem
	I0717 21:15:12.907897 1162650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1135872.pem
	I0717 21:15:12.916645 1162650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1135872.pem /etc/ssl/certs/51391683.0"
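
The openssl x509 -hash calls above print the subject-name hash that OpenSSL uses to look certificates up in /etc/ssl/certs, so each CA gets exposed as a <hash>.0 symlink (b5213941.0 for minikubeCA in this run). A sketch of that compute-and-link step, targeting a scratch directory instead of /etc/ssl/certs:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "strings"
    )

    // linkByHash reproduces the ln -fs step above: compute the OpenSSL
    // subject hash of a PEM cert and expose it as <certsDir>/<hash>.0.
    func linkByHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := certsDir + "/" + hash + ".0"
        os.Remove(link) // -f semantics: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        // Scratch directory here; the run above targets /etc/ssl/certs.
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir()); err != nil {
            log.Fatal(err)
        }
    }
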
	I0717 21:15:12.928575 1162650 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 21:15:12.933069 1162650 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 21:15:12.933239 1162650 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-822297 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-822297 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:15:12.933331 1162650 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 21:15:12.933396 1162650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 21:15:12.974737 1162650 cri.go:89] found id: ""
	I0717 21:15:12.974808 1162650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 21:15:12.985721 1162650 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 21:15:12.996417 1162650 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 21:15:12.996499 1162650 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 21:15:13.009631 1162650 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 21:15:13.009688 1162650 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 21:15:13.066682 1162650 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0717 21:15:13.067134 1162650 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 21:15:13.120152 1162650 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 21:15:13.120221 1162650 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-aws
	I0717 21:15:13.120258 1162650 kubeadm.go:322] OS: Linux
	I0717 21:15:13.120305 1162650 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 21:15:13.120354 1162650 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 21:15:13.120403 1162650 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 21:15:13.120452 1162650 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 21:15:13.120500 1162650 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 21:15:13.120549 1162650 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 21:15:13.207431 1162650 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 21:15:13.207536 1162650 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 21:15:13.207631 1162650 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 21:15:13.458250 1162650 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 21:15:13.459989 1162650 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 21:15:13.460071 1162650 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 21:15:13.561620 1162650 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 21:15:13.565755 1162650 out.go:204]   - Generating certificates and keys ...
	I0717 21:15:13.565906 1162650 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 21:15:13.565995 1162650 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 21:15:14.042348 1162650 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 21:15:15.087779 1162650 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 21:15:15.793641 1162650 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 21:15:15.944889 1162650 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 21:15:16.550252 1162650 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 21:15:16.550434 1162650 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-822297 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 21:15:16.951531 1162650 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 21:15:16.952043 1162650 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-822297 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 21:15:17.981058 1162650 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 21:15:18.903915 1162650 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 21:15:19.315117 1162650 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 21:15:19.315523 1162650 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 21:15:19.923847 1162650 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 21:15:20.495190 1162650 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 21:15:21.272611 1162650 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 21:15:22.078265 1162650 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 21:15:22.079019 1162650 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 21:15:22.081288 1162650 out.go:204]   - Booting up control plane ...
	I0717 21:15:22.081392 1162650 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 21:15:22.092358 1162650 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 21:15:22.092435 1162650 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 21:15:22.092513 1162650 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 21:15:22.095198 1162650 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 21:15:34.098061 1162650 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002834 seconds
	I0717 21:15:34.098205 1162650 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 21:15:34.112528 1162650 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 21:15:34.632362 1162650 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 21:15:34.632530 1162650 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-822297 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 21:15:35.144752 1162650 kubeadm.go:322] [bootstrap-token] Using token: zr5vm3.ubbe4d2cn08m0ne1
	I0717 21:15:35.147692 1162650 out.go:204]   - Configuring RBAC rules ...
	I0717 21:15:35.147820 1162650 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 21:15:35.158033 1162650 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 21:15:35.174974 1162650 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 21:15:35.182024 1162650 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 21:15:35.187941 1162650 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 21:15:35.201384 1162650 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 21:15:35.216721 1162650 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 21:15:35.519668 1162650 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 21:15:35.583298 1162650 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 21:15:35.583317 1162650 kubeadm.go:322] 
	I0717 21:15:35.583374 1162650 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 21:15:35.583379 1162650 kubeadm.go:322] 
	I0717 21:15:35.583451 1162650 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 21:15:35.583456 1162650 kubeadm.go:322] 
	I0717 21:15:35.583480 1162650 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 21:15:35.583545 1162650 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 21:15:35.583594 1162650 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 21:15:35.583599 1162650 kubeadm.go:322] 
	I0717 21:15:35.583648 1162650 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 21:15:35.583718 1162650 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 21:15:35.583787 1162650 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 21:15:35.583792 1162650 kubeadm.go:322] 
	I0717 21:15:35.583871 1162650 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 21:15:35.583947 1162650 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 21:15:35.583952 1162650 kubeadm.go:322] 
	I0717 21:15:35.584037 1162650 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zr5vm3.ubbe4d2cn08m0ne1 \
	I0717 21:15:35.584137 1162650 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:114c2c6cf073ae167542850daf65adc7c2fffca2d9da9ec1b9de2454bc4224aa \
	I0717 21:15:35.584159 1162650 kubeadm.go:322]     --control-plane 
	I0717 21:15:35.584163 1162650 kubeadm.go:322] 
	I0717 21:15:35.584243 1162650 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 21:15:35.584247 1162650 kubeadm.go:322] 
	I0717 21:15:35.584325 1162650 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zr5vm3.ubbe4d2cn08m0ne1 \
	I0717 21:15:35.584423 1162650 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:114c2c6cf073ae167542850daf65adc7c2fffca2d9da9ec1b9de2454bc4224aa 
	I0717 21:15:35.585881 1162650 kubeadm.go:322] W0717 21:15:13.065763    1230 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0717 21:15:35.586089 1162650 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-aws\n", err: exit status 1
	I0717 21:15:35.586189 1162650 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 21:15:35.586307 1162650 kubeadm.go:322] W0717 21:15:22.086878    1230 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 21:15:35.586424 1162650 kubeadm.go:322] W0717 21:15:22.089557    1230 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 21:15:35.586439 1162650 cni.go:84] Creating CNI manager for ""
	I0717 21:15:35.586448 1162650 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:15:35.588855 1162650 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 21:15:35.591560 1162650 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 21:15:35.606097 1162650 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0717 21:15:35.606116 1162650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 21:15:35.631069 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 21:15:36.080051 1162650 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 21:15:36.080205 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:36.080284 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=ingress-addon-legacy-822297 minikube.k8s.io/updated_at=2023_07_17T21_15_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:36.245271 1162650 ops.go:34] apiserver oom_adj: -16
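
The oom_adj of -16 read above tells the kernel's OOM killer to strongly prefer other processes over the apiserver. A sketch reproducing the read that the /proc one-liner performs (pgrep for the pid, then read the proc file):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Find the kube-apiserver pid the same way the bash one-liner above does.
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            log.Fatal(err)
        }
        pid := strings.Fields(string(out))[0] // first matching pid
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }
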
	I0717 21:15:36.245362 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:36.843580 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:37.343757 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:37.843732 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:38.343571 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:38.843180 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:39.343476 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:39.843563 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:40.343795 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:40.843034 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:41.343682 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:41.843090 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:42.343614 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:42.843067 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:43.343109 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:43.843506 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:44.343691 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:44.843171 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:45.343644 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:45.844043 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:46.343259 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:46.843749 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:47.343649 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:47.843701 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:48.343108 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:48.843654 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:49.343545 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:49.843616 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:50.343810 1162650 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:15:50.485794 1162650 kubeadm.go:1081] duration metric: took 14.405655892s to wait for elevateKubeSystemPrivileges.
	I0717 21:15:50.485823 1162650 kubeadm.go:406] StartCluster complete in 37.552670709s
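
The burst of `kubectl get sa default` runs above is a poll loop: the command fails until the controller-manager has created the `default` service account, so it is retried roughly every 500ms until it exits 0. A generic sketch of that readiness gate:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls like the loop above: retry every 500ms until
    // `kubectl get sa default` succeeds or the deadline passes.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
            if cmd.Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            log.Fatal(err)
        }
    }
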
	I0717 21:15:50.485840 1162650 settings.go:142] acquiring lock: {Name:mkf49a04ad0833d4cf5e309fbf4dcc2866032ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:15:50.485899 1162650 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:15:50.486582 1162650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/kubeconfig: {Name:mkeb40f750a7362e9193faee51ea6ae2e33e893d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:15:50.487295 1162650 kapi.go:59] client config for ingress-addon-legacy-822297: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 21:15:50.488676 1162650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 21:15:50.489479 1162650 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 21:15:50.489550 1162650 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-822297"
	I0717 21:15:50.489564 1162650 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-822297"
	I0717 21:15:50.489626 1162650 host.go:66] Checking if "ingress-addon-legacy-822297" exists ...
	I0717 21:15:50.490102 1162650 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-822297 --format={{.State.Status}}
	I0717 21:15:50.490286 1162650 config.go:182] Loaded profile config "ingress-addon-legacy-822297": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0717 21:15:50.490380 1162650 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 21:15:50.490503 1162650 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-822297"
	I0717 21:15:50.490519 1162650 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-822297"
	I0717 21:15:50.490778 1162650 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-822297 --format={{.State.Status}}
	I0717 21:15:50.537201 1162650 kapi.go:59] client config for ingress-addon-legacy-822297: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 21:15:50.540300 1162650 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:15:50.543071 1162650 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:15:50.543094 1162650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 21:15:50.543168 1162650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-822297
	I0717 21:15:50.569552 1162650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34041 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/ingress-addon-legacy-822297/id_rsa Username:docker}
	I0717 21:15:50.614821 1162650 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-822297"
	I0717 21:15:50.614871 1162650 host.go:66] Checking if "ingress-addon-legacy-822297" exists ...
	I0717 21:15:50.615369 1162650 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-822297 --format={{.State.Status}}
	I0717 21:15:50.651168 1162650 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 21:15:50.651201 1162650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 21:15:50.651260 1162650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-822297
	I0717 21:15:50.692570 1162650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34041 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/ingress-addon-legacy-822297/id_rsa Username:docker}
	I0717 21:15:50.747877 1162650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 21:15:50.773722 1162650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:15:50.876218 1162650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 21:15:51.225531 1162650 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
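
The sed pipeline above edits the CoreDNS Corefile in place: it inserts a hosts block mapping host.minikube.internal to the gateway IP just before the forward plugin, then replaces the ConfigMap. Just the text edit, with the kubectl round-trip elided:

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts block before the forward plugin,
    // mirroring the sed expression in the log line above.
    func injectHostRecord(corefile, ip, name string) string {
        block := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, name)
        var b strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                b.WriteString(block)
            }
            b.WriteString(line)
        }
        return b.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
        fmt.Print(injectHostRecord(corefile, "192.168.49.1", "host.minikube.internal"))
    }
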
	I0717 21:15:51.296163 1162650 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-822297" context rescaled to 1 replicas
	I0717 21:15:51.296214 1162650 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 21:15:51.298709 1162650 out.go:177] * Verifying Kubernetes components...
	I0717 21:15:51.300902 1162650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:15:51.372834 1162650 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 21:15:51.370986 1162650 kapi.go:59] client config for ingress-addon-legacy-822297: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 21:15:51.375531 1162650 addons.go:502] enable addons completed in 886.043409ms: enabled=[storage-provisioner default-storageclass]
	I0717 21:15:51.373343 1162650 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-822297" to be "Ready" ...
	I0717 21:15:53.383800 1162650 node_ready.go:58] node "ingress-addon-legacy-822297" has status "Ready":"False"
	I0717 21:15:55.384455 1162650 node_ready.go:58] node "ingress-addon-legacy-822297" has status "Ready":"False"
	I0717 21:15:57.384589 1162650 node_ready.go:58] node "ingress-addon-legacy-822297" has status "Ready":"False"
	I0717 21:15:59.384001 1162650 node_ready.go:49] node "ingress-addon-legacy-822297" has status "Ready":"True"
	I0717 21:15:59.384033 1162650 node_ready.go:38] duration metric: took 8.008463948s waiting for node "ingress-addon-legacy-822297" to be "Ready" ...
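
The node_ready waits above track the node's Ready condition in .status.conditions. One way to read the same signal through kubectl's jsonpath output (node name taken from this run):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    // nodeReady reports whether the node's Ready condition is "True".
    func nodeReady(node string) (bool, error) {
        out, err := exec.Command("kubectl", "get", "node", node, "-o",
            `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        for {
            ok, err := nodeReady("ingress-addon-legacy-822297")
            if err != nil {
                log.Fatal(err)
            }
            if ok {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(2 * time.Second) // the waits above land roughly 2s apart
        }
    }
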
	I0717 21:15:59.384044 1162650 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 21:15:59.391176 1162650 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-g626b" in "kube-system" namespace to be "Ready" ...
	I0717 21:16:01.397082 1162650 pod_ready.go:102] pod "coredns-66bff467f8-g626b" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 21:15:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0717 21:16:03.397691 1162650 pod_ready.go:102] pod "coredns-66bff467f8-g626b" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 21:15:51 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0717 21:16:05.399627 1162650 pod_ready.go:102] pod "coredns-66bff467f8-g626b" in "kube-system" namespace has status "Ready":"False"
	I0717 21:16:07.399757 1162650 pod_ready.go:102] pod "coredns-66bff467f8-g626b" in "kube-system" namespace has status "Ready":"False"
	I0717 21:16:08.900472 1162650 pod_ready.go:92] pod "coredns-66bff467f8-g626b" in "kube-system" namespace has status "Ready":"True"
	I0717 21:16:08.900503 1162650 pod_ready.go:81] duration metric: took 9.509284185s waiting for pod "coredns-66bff467f8-g626b" in "kube-system" namespace to be "Ready" ...
	I0717 21:16:08.900515 1162650 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-822297" in "kube-system" namespace to be "Ready" ...
	I0717 21:16:08.905392 1162650 pod_ready.go:92] pod "etcd-ingress-addon-legacy-822297" in "kube-system" namespace has status "Ready":"True"
	I0717 21:16:08.905419 1162650 pod_ready.go:81] duration metric: took 4.896716ms waiting for pod "etcd-ingress-addon-legacy-822297" in "kube-system" namespace to be "Ready" ...
	I0717 21:16:08.905445 1162650 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-822297" in "kube-system" namespace to be "Ready" ...
	I0717 21:16:08.910341 1162650 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-822297" in "kube-system" namespace has status "Ready":"True"
	I0717 21:16:08.910366 1162650 pod_ready.go:81] duration metric: took 4.91046ms waiting for pod "kube-apiserver-ingress-addon-legacy-822297" in "kube-system" namespace to be "Ready" ...
	I0717 21:16:08.910378 1162650 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-822297" in "kube-system" namespace to be "Ready" ...
	I0717 21:16:08.915358 1162650 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-822297" in "kube-system" namespace has status "Ready":"True"
	I0717 21:16:08.915381 1162650 pod_ready.go:81] duration metric: took 4.995604ms waiting for pod "kube-controller-manager-ingress-addon-legacy-822297" in "kube-system" namespace to be "Ready" ...
	I0717 21:16:08.915394 1162650 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zf7fm" in "kube-system" namespace to be "Ready" ...
	I0717 21:16:08.920580 1162650 pod_ready.go:92] pod "kube-proxy-zf7fm" in "kube-system" namespace has status "Ready":"True"
	I0717 21:16:08.920607 1162650 pod_ready.go:81] duration metric: took 5.205688ms waiting for pod "kube-proxy-zf7fm" in "kube-system" namespace to be "Ready" ...
	I0717 21:16:08.920619 1162650 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-822297" in "kube-system" namespace to be "Ready" ...
	I0717 21:16:09.096021 1162650 request.go:628] Waited for 175.30294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-822297
	I0717 21:16:09.295300 1162650 request.go:628] Waited for 196.261012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-822297
	I0717 21:16:09.298417 1162650 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-822297" in "kube-system" namespace has status "Ready":"True"
	I0717 21:16:09.298443 1162650 pod_ready.go:81] duration metric: took 377.783884ms waiting for pod "kube-scheduler-ingress-addon-legacy-822297" in "kube-system" namespace to be "Ready" ...
	I0717 21:16:09.298457 1162650 pod_ready.go:38] duration metric: took 9.914396382s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 21:16:09.298477 1162650 api_server.go:52] waiting for apiserver process to appear ...
	I0717 21:16:09.298538 1162650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 21:16:09.311671 1162650 api_server.go:72] duration metric: took 18.015425628s to wait for apiserver process to appear ...
	I0717 21:16:09.311696 1162650 api_server.go:88] waiting for apiserver healthz status ...
	I0717 21:16:09.311713 1162650 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 21:16:09.320741 1162650 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 21:16:09.321864 1162650 api_server.go:141] control plane version: v1.18.20
	I0717 21:16:09.321890 1162650 api_server.go:131] duration metric: took 10.187959ms to wait for apiserver health ...
	I0717 21:16:09.321898 1162650 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 21:16:09.495229 1162650 request.go:628] Waited for 173.26597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0717 21:16:09.501365 1162650 system_pods.go:59] 8 kube-system pods found
	I0717 21:16:09.501402 1162650 system_pods.go:61] "coredns-66bff467f8-g626b" [fbd8c131-acbf-404a-b6b2-52268694473e] Running
	I0717 21:16:09.501410 1162650 system_pods.go:61] "etcd-ingress-addon-legacy-822297" [c2881636-386c-4132-88d8-277807459515] Running
	I0717 21:16:09.501415 1162650 system_pods.go:61] "kindnet-cxmcn" [1982c965-ab3c-445a-a326-dc69299ed014] Running
	I0717 21:16:09.501420 1162650 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-822297" [1fe397ba-894c-448b-b937-948dfed6111a] Running
	I0717 21:16:09.501426 1162650 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-822297" [5c7873e7-59f8-4012-9fb2-b2ce6420e4ce] Running
	I0717 21:16:09.501430 1162650 system_pods.go:61] "kube-proxy-zf7fm" [ebf91b84-b894-4f6a-a296-ce47ebcbd92e] Running
	I0717 21:16:09.501436 1162650 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-822297" [13c1f427-508b-4ed4-96ce-a030c127ca2e] Running
	I0717 21:16:09.501440 1162650 system_pods.go:61] "storage-provisioner" [07a0eab4-d653-45a1-bfea-a0211a6dcbf3] Running
	I0717 21:16:09.501446 1162650 system_pods.go:74] duration metric: took 179.542788ms to wait for pod list to return data ...
	I0717 21:16:09.501459 1162650 default_sa.go:34] waiting for default service account to be created ...
	I0717 21:16:09.695927 1162650 request.go:628] Waited for 194.357277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0717 21:16:09.698624 1162650 default_sa.go:45] found service account: "default"
	I0717 21:16:09.698653 1162650 default_sa.go:55] duration metric: took 197.184014ms for default service account to be created ...
	I0717 21:16:09.698665 1162650 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 21:16:09.896090 1162650 request.go:628] Waited for 197.339747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0717 21:16:09.902195 1162650 system_pods.go:86] 8 kube-system pods found
	I0717 21:16:09.902226 1162650 system_pods.go:89] "coredns-66bff467f8-g626b" [fbd8c131-acbf-404a-b6b2-52268694473e] Running
	I0717 21:16:09.902234 1162650 system_pods.go:89] "etcd-ingress-addon-legacy-822297" [c2881636-386c-4132-88d8-277807459515] Running
	I0717 21:16:09.902240 1162650 system_pods.go:89] "kindnet-cxmcn" [1982c965-ab3c-445a-a326-dc69299ed014] Running
	I0717 21:16:09.902245 1162650 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-822297" [1fe397ba-894c-448b-b937-948dfed6111a] Running
	I0717 21:16:09.902251 1162650 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-822297" [5c7873e7-59f8-4012-9fb2-b2ce6420e4ce] Running
	I0717 21:16:09.902255 1162650 system_pods.go:89] "kube-proxy-zf7fm" [ebf91b84-b894-4f6a-a296-ce47ebcbd92e] Running
	I0717 21:16:09.902262 1162650 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-822297" [13c1f427-508b-4ed4-96ce-a030c127ca2e] Running
	I0717 21:16:09.902266 1162650 system_pods.go:89] "storage-provisioner" [07a0eab4-d653-45a1-bfea-a0211a6dcbf3] Running
	I0717 21:16:09.902273 1162650 system_pods.go:126] duration metric: took 203.585541ms to wait for k8s-apps to be running ...
	I0717 21:16:09.902287 1162650 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 21:16:09.902349 1162650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:16:09.916316 1162650 system_svc.go:56] duration metric: took 14.020447ms WaitForService to wait for kubelet.
	I0717 21:16:09.916343 1162650 kubeadm.go:581] duration metric: took 18.620103423s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 21:16:09.916362 1162650 node_conditions.go:102] verifying NodePressure condition ...
	I0717 21:16:10.095813 1162650 request.go:628] Waited for 179.348687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0717 21:16:10.099148 1162650 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 21:16:10.099189 1162650 node_conditions.go:123] node cpu capacity is 2
	I0717 21:16:10.099203 1162650 node_conditions.go:105] duration metric: took 182.835338ms to run NodePressure ...
	I0717 21:16:10.099214 1162650 start.go:228] waiting for startup goroutines ...
	I0717 21:16:10.099244 1162650 start.go:233] waiting for cluster config update ...
	I0717 21:16:10.099257 1162650 start.go:242] writing updated cluster config ...
	I0717 21:16:10.099583 1162650 ssh_runner.go:195] Run: rm -f paused
	I0717 21:16:10.166541 1162650 start.go:578] kubectl: 1.27.3, cluster: 1.18.20 (minor skew: 9)
	I0717 21:16:10.168874 1162650 out.go:177] 
	W0717 21:16:10.171155 1162650 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.18.20.
	I0717 21:16:10.173091 1162650 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0717 21:16:10.174997 1162650 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-822297" cluster and "default" namespace by default
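
	The start sequence above is a series of polls: node Ready, each system pod Ready, then an apiserver /healthz probe. As a rough sketch only (this is not minikube's own helper; the function name, kubeconfig path, and 2s poll interval are assumptions), the node wait can be expressed with client-go like this:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForNodeReady polls the node object until its NodeReady condition
	// reports True, which is the check behind the node_ready.go lines above.
	func waitForNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil // node has status "Ready":"True"
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("node %q was not Ready within %v", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForNodeReady(cs, "ingress-addon-legacy-822297", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}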
	
	* 
	* ==> CRI-O <==
	* Jul 17 21:19:20 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:20.704674138Z" level=info msg="Stopped container 8c9fa1b61a5184f16d50bf987020745f527ff8ceff2584ab56c5aea0f7d59817: ingress-nginx/ingress-nginx-controller-7fcf777cb7-dv5b7/controller" id=7784db55-d20e-4d04-9335-9647bc927425 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 17 21:19:20 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:20.705334576Z" level=info msg="Stopping pod sandbox: 5589386589d9b9979c20a708594b7be3090ee029ec89503a6ce803529257b741" id=74344d32-a335-40ce-9cfa-a691e0a4edde name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 21:19:20 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:20.705697481Z" level=info msg="Stopped container 8c9fa1b61a5184f16d50bf987020745f527ff8ceff2584ab56c5aea0f7d59817: ingress-nginx/ingress-nginx-controller-7fcf777cb7-dv5b7/controller" id=788eae3a-a39c-4c13-8944-64d493fb3b95 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 17 21:19:20 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:20.706181582Z" level=info msg="Stopping pod sandbox: 5589386589d9b9979c20a708594b7be3090ee029ec89503a6ce803529257b741" id=be71792f-b1a8-495f-8cd5-16686ffa7817 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 21:19:20 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:20.708906199Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-NK3QWA5ASR6HIKR6 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-L3LYEM765MAWTG4W - [0:0]\n-X KUBE-HP-NK3QWA5ASR6HIKR6\n-X KUBE-HP-L3LYEM765MAWTG4W\nCOMMIT\n"
	Jul 17 21:19:20 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:20.710521056Z" level=info msg="Closing host port tcp:80"
	Jul 17 21:19:20 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:20.710569819Z" level=info msg="Closing host port tcp:443"
	Jul 17 21:19:20 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:20.711794671Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 17 21:19:20 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:20.711817375Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 17 21:19:20 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:20.711975397Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-dv5b7 Namespace:ingress-nginx ID:5589386589d9b9979c20a708594b7be3090ee029ec89503a6ce803529257b741 UID:8a5155d6-c421-4ad6-badd-aef739d56721 NetNS:/var/run/netns/f5dc2a60-ab57-4a3b-91ee-551fc854df99 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 21:19:20 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:20.712149698Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-dv5b7 from CNI network \"kindnet\" (type=ptp)"
	Jul 17 21:19:20 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:20.726777964Z" level=info msg="Stopped pod sandbox: 5589386589d9b9979c20a708594b7be3090ee029ec89503a6ce803529257b741" id=74344d32-a335-40ce-9cfa-a691e0a4edde name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 21:19:20 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:20.726902378Z" level=info msg="Stopped pod sandbox (already stopped): 5589386589d9b9979c20a708594b7be3090ee029ec89503a6ce803529257b741" id=be71792f-b1a8-495f-8cd5-16686ffa7817 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 21:19:21 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:21.077064672Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=2285f57f-3f1e-48b6-b6e3-f6d4c7e36be0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 17 21:19:21 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:21.077341209Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=2285f57f-3f1e-48b6-b6e3-f6d4c7e36be0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 17 21:19:21 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:21.078663948Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=c8b44cf4-16c2-424d-8f7d-fbeea3777777 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 17 21:19:21 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:21.078879906Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=c8b44cf4-16c2-424d-8f7d-fbeea3777777 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 17 21:19:21 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:21.079730933Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-r4rh8/hello-world-app" id=6089c78e-ab5c-43c2-9622-9b0afa8faa1d name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 17 21:19:21 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:21.079831602Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 21:19:21 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:21.176312404Z" level=info msg="Created container 50aa4bc5dd64a633b4bce93325bd94e97f79632e47fec1802aa2cdf40201269b: default/hello-world-app-5f5d8b66bb-r4rh8/hello-world-app" id=6089c78e-ab5c-43c2-9622-9b0afa8faa1d name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jul 17 21:19:21 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:21.177397597Z" level=info msg="Starting container: 50aa4bc5dd64a633b4bce93325bd94e97f79632e47fec1802aa2cdf40201269b" id=84beb1e4-361f-488e-9da2-4121bc1f6f79 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jul 17 21:19:21 ingress-addon-legacy-822297 conmon[3797]: conmon 50aa4bc5dd64a633b4bc <ninfo>: container 3809 exited with status 1
	Jul 17 21:19:21 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:21.195711544Z" level=info msg="Started container" PID=3809 containerID=50aa4bc5dd64a633b4bce93325bd94e97f79632e47fec1802aa2cdf40201269b description=default/hello-world-app-5f5d8b66bb-r4rh8/hello-world-app id=84beb1e4-361f-488e-9da2-4121bc1f6f79 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=f74068625b1e3383d635485c2a7dd3269e5ebd6339ac3104b14a92924f194859
	Jul 17 21:19:21 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:21.581495792Z" level=info msg="Removing container: 7f1d544a689967d9fbe61b07e2dda899aec8582dfbb18c7b01bdc5dd2109872a" id=8118a904-7000-4f45-9ac8-0be661c4ab1e name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jul 17 21:19:21 ingress-addon-legacy-822297 crio[899]: time="2023-07-17 21:19:21.618198780Z" level=info msg="Removed container 7f1d544a689967d9fbe61b07e2dda899aec8582dfbb18c7b01bdc5dd2109872a: default/hello-world-app-5f5d8b66bb-r4rh8/hello-world-app" id=8118a904-7000-4f45-9ac8-0be661c4ab1e name=/runtime.v1alpha2.RuntimeService/RemoveContainer
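
	For readability, the escaped payload in the "Restoring iptables rules" line above (21:19:20.708906199) decodes to the iptables-restore input below; it declares and then deletes (-X) the two per-pod hostport NAT chains as host ports 80 and 443 are released:

	*nat
	:KUBE-HP-NK3QWA5ASR6HIKR6 - [0:0]
	:KUBE-HOSTPORTS - [0:0]
	:KUBE-HP-L3LYEM765MAWTG4W - [0:0]
	-X KUBE-HP-NK3QWA5ASR6HIKR6
	-X KUBE-HP-L3LYEM765MAWTG4W
	COMMIT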
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	50aa4bc5dd64a       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                   5 seconds ago       Exited              hello-world-app           2                   f74068625b1e3       hello-world-app-5f5d8b66bb-r4rh8
	37f0d09e79dce       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                    2 minutes ago       Running             nginx                     0                   ba31174209188       nginx
	8c9fa1b61a518       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   5589386589d9b       ingress-nginx-controller-7fcf777cb7-dv5b7
	6a25f61ab4300       a883f7fc35610a84d589cbb450eade9face1d1a8b2cbdafa1690cbffe68cfe88                                                   3 minutes ago       Exited              patch                     1                   89e0739adbefd       ingress-nginx-admission-patch-wb9cj
	b785eb0e8a650       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   dbf72f006c5ba       ingress-nginx-admission-create-6gkpq
	d8fb68133e5de       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   54e831cf4c1a2       storage-provisioner
	930de88f02ac0       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   e8d1b4c30f333       coredns-66bff467f8-g626b
	1a1eb15312657       docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f                 3 minutes ago       Running             kindnet-cni               0                   c5c21a02cf5c1       kindnet-cxmcn
	48e43d1eeef4d       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   505d1fbc9696d       kube-proxy-zf7fm
	d1765cb9c24e9       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   4 minutes ago       Running             kube-scheduler            0                   15e298f329c81       kube-scheduler-ingress-addon-legacy-822297
	53f933273e7d0       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   4 minutes ago       Running             kube-apiserver            0                   67d2c00408824       kube-apiserver-ingress-addon-legacy-822297
	7d1430d8490de       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   4 minutes ago       Running             etcd                      0                   8bce0ca5483a3       etcd-ingress-addon-legacy-822297
	857698735f7e9       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   4 minutes ago       Running             kube-controller-manager   0                   c53281258f88c       kube-controller-manager-ingress-addon-legacy-822297
	
	* 
	* ==> coredns [930de88f02ac06002df443b90aa0fb02b7bd361a1dcdbdfd27e25740db3b1d6c] <==
	* [INFO] 10.244.0.5:57249 - 10305 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000031377s
	[INFO] 10.244.0.5:48682 - 1248 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002376768s
	[INFO] 10.244.0.5:57249 - 11262 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000997496s
	[INFO] 10.244.0.5:57249 - 17036 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002336653s
	[INFO] 10.244.0.5:48682 - 17219 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00202591s
	[INFO] 10.244.0.5:48682 - 61871 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000116718s
	[INFO] 10.244.0.5:57249 - 23199 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054178s
	[INFO] 10.244.0.5:47440 - 28406 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000096074s
	[INFO] 10.244.0.5:58409 - 29272 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000051995s
	[INFO] 10.244.0.5:58409 - 62697 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000040263s
	[INFO] 10.244.0.5:47440 - 43121 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000028332s
	[INFO] 10.244.0.5:47440 - 7561 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039056s
	[INFO] 10.244.0.5:58409 - 60707 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000028579s
	[INFO] 10.244.0.5:47440 - 8491 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034453s
	[INFO] 10.244.0.5:58409 - 14102 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000026314s
	[INFO] 10.244.0.5:47440 - 19751 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036209s
	[INFO] 10.244.0.5:58409 - 30334 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000028891s
	[INFO] 10.244.0.5:47440 - 61172 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036152s
	[INFO] 10.244.0.5:58409 - 29926 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038391s
	[INFO] 10.244.0.5:58409 - 8845 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002180567s
	[INFO] 10.244.0.5:47440 - 11687 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00205868s
	[INFO] 10.244.0.5:58409 - 36286 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000979855s
	[INFO] 10.244.0.5:58409 - 30958 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055623s
	[INFO] 10.244.0.5:47440 - 15552 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001375538s
	[INFO] 10.244.0.5:47440 - 14289 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060496s
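
	The NXDOMAIN ladder above is resolv.conf search-path expansion: with Kubernetes' usual ndots:5 setting, the querying pod (10.244.0.5, the ingress controller, whose first search domain is ingress-nginx.svc.cluster.local) tries each search suffix in turn before the bare service name finally answers NOERROR. A minimal sketch of the standard workaround, assuming it runs inside the cluster: a trailing dot marks the name fully qualified, so the resolver issues a single query.

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// The trailing dot suppresses search-domain expansion, avoiding the
		// NXDOMAIN round trips visible in the coredns log above.
		addrs, err := net.LookupHost("hello-world-app.default.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println(addrs)
	}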
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-822297
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-822297
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=ingress-addon-legacy-822297
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T21_15_36_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 21:15:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-822297
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 21:19:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 21:19:09 +0000   Mon, 17 Jul 2023 21:15:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 21:19:09 +0000   Mon, 17 Jul 2023 21:15:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 21:19:09 +0000   Mon, 17 Jul 2023 21:15:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 21:19:09 +0000   Mon, 17 Jul 2023 21:15:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-822297
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	System Info:
	  Machine ID:                 798ac0d41f6641329e80d3a8e4c4e370
	  System UUID:                5d5f6e04-784c-477f-929a-d1562f4d9ba4
	  Boot ID:                    30727b23-eda1-49fe-8b46-0f11c052162c
	  Kernel Version:             5.15.0-1039-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-r4rh8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-66bff467f8-g626b                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m35s
	  kube-system                 etcd-ingress-addon-legacy-822297                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kindnet-cxmcn                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m36s
	  kube-system                 kube-apiserver-ingress-addon-legacy-822297             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-822297    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-proxy-zf7fm                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-scheduler-ingress-addon-legacy-822297             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m2s (x5 over 4m2s)  kubelet     Node ingress-addon-legacy-822297 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x4 over 4m2s)  kubelet     Node ingress-addon-legacy-822297 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x4 over 4m2s)  kubelet     Node ingress-addon-legacy-822297 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m48s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m47s                kubelet     Node ingress-addon-legacy-822297 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m47s                kubelet     Node ingress-addon-legacy-822297 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m47s                kubelet     Node ingress-addon-legacy-822297 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m34s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m27s                kubelet     Node ingress-addon-legacy-822297 status is now: NodeReady
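
	One note on the two resource tables above: the raw report rendered every percentage as, e.g., "0 (0%!)(MISSING)". That is Go's fmt notation for a verb with no matching argument, which strongly suggests kubectl's literal "0 (0%)" output was passed through a printf-style call; the tables here restore the plain percentages. A one-line reproduction of the artifact:

	package main

	import "fmt"

	func main() {
		// "%)" is parsed as a verb with no argument, so fmt prints:
		// 0 (0%!)(MISSING)
		fmt.Printf("0 (0%)\n")
	}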
	
	* 
	* ==> dmesg <==
	* [  +0.001039] FS-Cache: O-key=[8] 'c5d6c90000000000'
	[  +0.000736] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001009] FS-Cache: N-cookie d=00000000ec1ea241{9p.inode} n=0000000057de11d6
	[  +0.001024] FS-Cache: N-key=[8] 'c5d6c90000000000'
	[  +0.003172] FS-Cache: Duplicate cookie detected
	[  +0.000720] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001041] FS-Cache: O-cookie d=00000000ec1ea241{9p.inode} n=00000000a1180baf
	[  +0.001034] FS-Cache: O-key=[8] 'c5d6c90000000000'
	[  +0.000718] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000920] FS-Cache: N-cookie d=00000000ec1ea241{9p.inode} n=00000000acae0e0a
	[  +0.001105] FS-Cache: N-key=[8] 'c5d6c90000000000'
	[Jul17 21:14] FS-Cache: Duplicate cookie detected
	[  +0.000809] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001182] FS-Cache: O-cookie d=00000000ec1ea241{9p.inode} n=000000002f515dbf
	[  +0.001342] FS-Cache: O-key=[8] 'c4d6c90000000000'
	[  +0.000851] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001037] FS-Cache: N-cookie d=00000000ec1ea241{9p.inode} n=0000000057de11d6
	[  +0.001218] FS-Cache: N-key=[8] 'c4d6c90000000000'
	[  +0.402823] FS-Cache: Duplicate cookie detected
	[  +0.000743] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001041] FS-Cache: O-cookie d=00000000ec1ea241{9p.inode} n=000000001063103b
	[  +0.001105] FS-Cache: O-key=[8] 'cad6c90000000000'
	[  +0.000728] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=00000000ec1ea241{9p.inode} n=00000000b6fac530
	[  +0.001155] FS-Cache: N-key=[8] 'cad6c90000000000'
	
	* 
	* ==> etcd [7d1430d8490dedc0cc0c666773ea125fd9e731d8fe99b4af9b12065d15df898e] <==
	* raft2023/07/17 21:15:26 INFO: aec36adc501070cc became follower at term 0
	raft2023/07/17 21:15:26 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/07/17 21:15:26 INFO: aec36adc501070cc became follower at term 1
	raft2023/07/17 21:15:26 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-17 21:15:26.978106 W | auth: simple token is not cryptographically signed
	2023-07-17 21:15:27.041888 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-07-17 21:15:27.351463 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-17 21:15:27.351662 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-07-17 21:15:27.351801 I | embed: listening for peers on 192.168.49.2:2380
	2023-07-17 21:15:27.351934 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/07/17 21:15:27 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-17 21:15:27.352545 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/07/17 21:15:28 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/07/17 21:15:28 INFO: aec36adc501070cc became candidate at term 2
	raft2023/07/17 21:15:28 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/07/17 21:15:28 INFO: aec36adc501070cc became leader at term 2
	raft2023/07/17 21:15:28 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-07-17 21:15:28.337914 I | etcdserver: published {Name:ingress-addon-legacy-822297 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-07-17 21:15:28.338170 I | embed: ready to serve client requests
	2023-07-17 21:15:28.339823 I | embed: serving client requests on 192.168.49.2:2379
	2023-07-17 21:15:28.339932 I | etcdserver: setting up the initial cluster version to 3.4
	2023-07-17 21:15:28.340306 I | embed: ready to serve client requests
	2023-07-17 21:15:28.341665 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-07-17 21:15:28.355380 I | etcdserver/api: enabled capabilities for version 3.4
	2023-07-17 21:15:28.356264 I | embed: serving client requests on 127.0.0.1:2379
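
	The raft lines above show a single-member election: the member votes for itself at term 2 and immediately becomes leader, since a one-node cluster needs no other votes. As a sketch only (the package path assumes a recent etcd release; the data directory is made up), the same kind of single-node server can be started in-process with etcd's embed package:

	package main

	import (
		"log"
		"time"

		"go.etcd.io/etcd/server/v3/embed"
	)

	func main() {
		cfg := embed.NewConfig()
		cfg.Dir = "single-node.etcd" // illustrative data dir
		e, err := embed.StartEtcd(cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer e.Close()
		select {
		case <-e.Server.ReadyNotify():
			log.Println("etcd is ready: single-member leader elected")
		case <-time.After(time.Minute):
			log.Fatal("etcd took too long to start")
		}
	}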
	
	* 
	* ==> kernel <==
	*  21:19:26 up  6:01,  0 users,  load average: 0.29, 0.91, 1.52
	Linux ingress-addon-legacy-822297 5.15.0-1039-aws #44~20.04.1-Ubuntu SMP Thu Jun 22 12:21:08 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [1a1eb153126570f8a1ab5b83397f00cb456bcf4b0e383b3c0c45c87673c17be3] <==
	* I0717 21:17:25.214916       1 main.go:227] handling current node
	I0717 21:17:35.226953       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:17:35.226981       1 main.go:227] handling current node
	I0717 21:17:45.329882       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:17:45.329915       1 main.go:227] handling current node
	I0717 21:17:55.333873       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:17:55.333901       1 main.go:227] handling current node
	I0717 21:18:05.344735       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:18:05.344768       1 main.go:227] handling current node
	I0717 21:18:15.348154       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:18:15.348187       1 main.go:227] handling current node
	I0717 21:18:25.356571       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:18:25.356606       1 main.go:227] handling current node
	I0717 21:18:35.362530       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:18:35.362562       1 main.go:227] handling current node
	I0717 21:18:45.373669       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:18:45.373703       1 main.go:227] handling current node
	I0717 21:18:55.377131       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:18:55.377300       1 main.go:227] handling current node
	I0717 21:19:05.394434       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:19:05.394464       1 main.go:227] handling current node
	I0717 21:19:15.406634       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:19:15.406662       1 main.go:227] handling current node
	I0717 21:19:25.410882       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 21:19:25.410910       1 main.go:227] handling current node
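
	kindnet's output above is a fixed-cadence reconcile loop: roughly every ten seconds it handles the node set, which in this run is always the single node 192.168.49.2. A bare-bones sketch of that shape (illustrative only, not kindnet's actual code):

	package main

	import (
		"log"
		"time"
	)

	func main() {
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for range ticker.C {
			// In kindnet this would list nodes from the API server and
			// reconcile routes; the single-node case just logs and moves on.
			for _, ip := range []string{"192.168.49.2"} {
				log.Printf("Handling node with IP: %s", ip)
			}
		}
	}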
	
	* 
	* ==> kube-apiserver [53f933273e7d0acb964195f9e7de88a97fb88354a5fcf397c41c9174de69a6a7] <==
	* E0717 21:15:32.650450       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0717 21:15:32.764885       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0717 21:15:32.776837       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 21:15:32.777487       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0717 21:15:32.783583       1 cache.go:39] Caches are synced for autoregister controller
	I0717 21:15:32.783894       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 21:15:33.475693       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0717 21:15:33.475724       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 21:15:33.483229       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0717 21:15:33.487617       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0717 21:15:33.487639       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0717 21:15:33.954135       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 21:15:33.991902       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0717 21:15:34.079960       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0717 21:15:34.081218       1 controller.go:609] quota admission added evaluator for: endpoints
	I0717 21:15:34.085480       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 21:15:34.942381       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0717 21:15:35.486712       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0717 21:15:35.549572       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0717 21:15:38.988958       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 21:15:50.512699       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0717 21:15:50.958238       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0717 21:16:11.094546       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0717 21:16:40.599908       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0717 21:19:18.525819       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [857698735f7e9f2c92b753b398ec4a868710eb4cd2a6aca80731688389e7be3b] <==
	* I0717 21:15:50.843437       1 shared_informer.go:230] Caches are synced for deployment 
	I0717 21:15:50.855648       1 shared_informer.go:230] Caches are synced for disruption 
	I0717 21:15:50.855682       1 disruption.go:339] Sending events to api server.
	I0717 21:15:50.895337       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0717 21:15:50.957298       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0717 21:15:50.957831       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0717 21:15:50.957650       1 shared_informer.go:230] Caches are synced for HPA 
	E0717 21:15:50.985008       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"bcfa6fd4-aa65-445d-9bf0-f26e64a49979", ResourceVersion:"209", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63825225335, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40019034c0), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x40019034e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001903500), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40012ab9c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x4001903520), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001903540), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001903580)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000bfbef0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40010337b8), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40001a8310), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40018c2460)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001033808)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
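
	The daemon_controller error that ends just above is a standard optimistic-concurrency conflict: the controller wrote a status update against a stale resourceVersion, and the apiserver rejected it with "the object has been modified; please apply your changes to the latest version and try again". The usual remedy, sketched here with client-go's retry helper (the annotation mutation is purely illustrative):

	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read the object on every attempt so the update always
			// targets the latest resourceVersion.
			ds, err := cs.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if ds.Annotations == nil {
				ds.Annotations = map[string]string{}
			}
			ds.Annotations["example/touched"] = "true"
			_, err = cs.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
			return err // a Conflict here triggers another attempt
		})
		if err != nil {
			log.Fatal(err)
		}
	}
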
	I0717 21:15:50.998236       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"582ee148-a0c9-43e4-8b45-c8cba374b7a6", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0717 21:15:51.004782       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0717 21:15:51.005142       1 shared_informer.go:230] Caches are synced for resource quota 
	I0717 21:15:51.014533       1 shared_informer.go:230] Caches are synced for resource quota 
	I0717 21:15:51.053735       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"52e32f82-3fa8-491e-a134-f60e97a9cf25", APIVersion:"apps/v1", ResourceVersion:"348", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-g626b
	E0717 21:15:51.057483       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"200f7e20-b7f0-4bc1-9411-f75dd34b93ef", ResourceVersion:"226", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63825225336, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230511-dc714da8\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40019035e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001903600)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001903620), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001903640), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001903660), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001903680), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230511-dc714da8", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40019036a0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40019036e0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000fc6140), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001033a18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40001a8380), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40018c2468)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001033a60)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0717 21:16:00.340861       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0717 21:16:11.051978       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a1fc3dec-07dd-4ef8-943a-0000cfba6939", APIVersion:"apps/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0717 21:16:11.074299       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"011b8d31-9a89-4282-96b1-8da51644d988", APIVersion:"apps/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-dv5b7
	I0717 21:16:11.128991       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"a1f01739-5b85-445b-9c72-d9adc6586968", APIVersion:"batch/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-6gkpq
	I0717 21:16:11.177196       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"3430d9a7-02b1-4985-b007-c5b1ec0adfb0", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-wb9cj
	I0717 21:16:14.241821       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"a1f01739-5b85-445b-9c72-d9adc6586968", APIVersion:"batch/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 21:16:15.214002       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"3430d9a7-02b1-4985-b007-c5b1ec0adfb0", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 21:19:00.584062       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"6d0a27a8-366a-4793-a9ab-a3e77011a4b7", APIVersion:"apps/v1", ResourceVersion:"713", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0717 21:19:00.613458       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"94ab59b6-90ac-445a-9b35-3748b32ffe3a", APIVersion:"apps/v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-r4rh8
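Note: the daemon_controller error above ("Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified") is an optimistic-concurrency conflict; the controller retries against the latest resource version, so it is expected startup noise rather than a cause of this failure. A sketch of how one might confirm the DaemonSet converged (hypothetical triage commands; the kubectl context name is the minikube profile):

    kubectl --context ingress-addon-legacy-822297 -n kube-system rollout status ds/kindnet
    kubectl --context ingress-addon-legacy-822297 -n kube-system get ds kindnet -o wide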
	
	* 
	* ==> kube-proxy [48e43d1eeef4d0607f74add43d5b68a46eb8a3ed4582b6cbff977d61ad9dfd11] <==
	* W0717 21:15:52.794835       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0717 21:15:52.806639       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0717 21:15:52.806690       1 server_others.go:186] Using iptables Proxier.
	I0717 21:15:52.807050       1 server.go:583] Version: v1.18.20
	I0717 21:15:52.812493       1 config.go:133] Starting endpoints config controller
	I0717 21:15:52.812521       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0717 21:15:52.812585       1 config.go:315] Starting service config controller
	I0717 21:15:52.812596       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0717 21:15:52.915852       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0717 21:15:52.915861       1 shared_informer.go:230] Caches are synced for service config 
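Note: the "Unknown proxy mode \"\", assuming iptables proxy" warning only means the mode field was left empty in the kube-proxy configuration, so v1.18 falls back to the iptables proxier; it is informational, not a failure. A sketch for inspecting how the mode is set (hypothetical; assumes the default kube-proxy ConfigMap layout):

    kubectl --context ingress-addon-legacy-822297 -n kube-system get cm kube-proxy -o yaml | grep 'mode:'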
	
	* 
	* ==> kube-scheduler [d1765cb9c24e949c08c8a7c99b59980664e20c8a4d1c487a3b98a02eef9761f5] <==
	* I0717 21:15:32.681418       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0717 21:15:32.683311       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0717 21:15:32.683444       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 21:15:32.683457       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 21:15:32.683487       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0717 21:15:32.691231       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 21:15:32.691345       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 21:15:32.691444       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 21:15:32.691528       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 21:15:32.691734       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 21:15:32.709579       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 21:15:32.709978       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 21:15:32.710050       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 21:15:32.710121       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 21:15:32.710189       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 21:15:32.710207       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 21:15:32.715887       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 21:15:33.547201       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 21:15:33.646890       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 21:15:33.706894       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 21:15:33.719465       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 21:15:33.919758       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 21:15:37.083614       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0717 21:15:51.310560       1 factory.go:503] pod kube-system/coredns-66bff467f8-g626b is already present in the backoff queue
	E0717 21:15:51.381194       1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
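Note: the burst of "forbidden" list errors is a startup race: the scheduler's informers begin listing resources before the API server has finished reconciling the bootstrap RBAC policy for system:kube-scheduler. They stop once caches sync at 21:15:37 above, so they are benign here. A hypothetical check that the bootstrap binding exists (assumes the default bootstrap RBAC names):

    kubectl --context ingress-addon-legacy-822297 get clusterrolebinding system:kube-scheduler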
	
	* 
	* ==> kubelet <==
	* Jul 17 21:19:04 ingress-addon-legacy-822297 kubelet[1656]: I0717 21:19:04.547934    1656 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b9fbdd073b0aec2676c9a87b40dd2b29229f867f19b32d08f8e82609c3e59ead
	Jul 17 21:19:04 ingress-addon-legacy-822297 kubelet[1656]: I0717 21:19:04.548052    1656 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7f1d544a689967d9fbe61b07e2dda899aec8582dfbb18c7b01bdc5dd2109872a
	Jul 17 21:19:04 ingress-addon-legacy-822297 kubelet[1656]: E0717 21:19:04.548300    1656 pod_workers.go:191] Error syncing pod 089db6b1-fee3-462b-b413-ed153cfeb9da ("hello-world-app-5f5d8b66bb-r4rh8_default(089db6b1-fee3-462b-b413-ed153cfeb9da)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-r4rh8_default(089db6b1-fee3-462b-b413-ed153cfeb9da)"
	Jul 17 21:19:05 ingress-addon-legacy-822297 kubelet[1656]: I0717 21:19:05.550499    1656 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7f1d544a689967d9fbe61b07e2dda899aec8582dfbb18c7b01bdc5dd2109872a
	Jul 17 21:19:05 ingress-addon-legacy-822297 kubelet[1656]: E0717 21:19:05.550748    1656 pod_workers.go:191] Error syncing pod 089db6b1-fee3-462b-b413-ed153cfeb9da ("hello-world-app-5f5d8b66bb-r4rh8_default(089db6b1-fee3-462b-b413-ed153cfeb9da)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-r4rh8_default(089db6b1-fee3-462b-b413-ed153cfeb9da)"
	Jul 17 21:19:08 ingress-addon-legacy-822297 kubelet[1656]: E0717 21:19:08.077283    1656 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 21:19:08 ingress-addon-legacy-822297 kubelet[1656]: E0717 21:19:08.077324    1656 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 21:19:08 ingress-addon-legacy-822297 kubelet[1656]: E0717 21:19:08.077372    1656 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 21:19:08 ingress-addon-legacy-822297 kubelet[1656]: E0717 21:19:08.077405    1656 pod_workers.go:191] Error syncing pod d2850f5b-3f4c-4c38-8eec-ca2f5f64ba97 ("kube-ingress-dns-minikube_kube-system(d2850f5b-3f4c-4c38-8eec-ca2f5f64ba97)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jul 17 21:19:16 ingress-addon-legacy-822297 kubelet[1656]: I0717 21:19:16.537078    1656 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-f6rlf" (UniqueName: "kubernetes.io/secret/d2850f5b-3f4c-4c38-8eec-ca2f5f64ba97-minikube-ingress-dns-token-f6rlf") pod "d2850f5b-3f4c-4c38-8eec-ca2f5f64ba97" (UID: "d2850f5b-3f4c-4c38-8eec-ca2f5f64ba97")
	Jul 17 21:19:16 ingress-addon-legacy-822297 kubelet[1656]: I0717 21:19:16.541951    1656 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2850f5b-3f4c-4c38-8eec-ca2f5f64ba97-minikube-ingress-dns-token-f6rlf" (OuterVolumeSpecName: "minikube-ingress-dns-token-f6rlf") pod "d2850f5b-3f4c-4c38-8eec-ca2f5f64ba97" (UID: "d2850f5b-3f4c-4c38-8eec-ca2f5f64ba97"). InnerVolumeSpecName "minikube-ingress-dns-token-f6rlf". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 21:19:16 ingress-addon-legacy-822297 kubelet[1656]: I0717 21:19:16.637440    1656 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-f6rlf" (UniqueName: "kubernetes.io/secret/d2850f5b-3f4c-4c38-8eec-ca2f5f64ba97-minikube-ingress-dns-token-f6rlf") on node "ingress-addon-legacy-822297" DevicePath ""
	Jul 17 21:19:18 ingress-addon-legacy-822297 kubelet[1656]: E0717 21:19:18.505520    1656 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dv5b7.1772c46760c422a4", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dv5b7", UID:"8a5155d6-c421-4ad6-badd-aef739d56721", APIVersion:"v1", ResourceVersion:"472", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-822297"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12589f59de446a4, ext:223072144386, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12589f59de446a4, ext:223072144386, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dv5b7.1772c46760c422a4" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 21:19:18 ingress-addon-legacy-822297 kubelet[1656]: E0717 21:19:18.516163    1656 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-dv5b7.1772c46760c422a4", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-dv5b7", UID:"8a5155d6-c421-4ad6-badd-aef739d56721", APIVersion:"v1", ResourceVersion:"472", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-822297"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12589f59de446a4, ext:223072144386, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12589f59e5be2f5, ext:223079983187, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-dv5b7.1772c46760c422a4" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 21:19:21 ingress-addon-legacy-822297 kubelet[1656]: I0717 21:19:21.076503    1656 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7f1d544a689967d9fbe61b07e2dda899aec8582dfbb18c7b01bdc5dd2109872a
	Jul 17 21:19:21 ingress-addon-legacy-822297 kubelet[1656]: W0717 21:19:21.578046    1656 pod_container_deletor.go:77] Container "5589386589d9b9979c20a708594b7be3090ee029ec89503a6ce803529257b741" not found in pod's containers
	Jul 17 21:19:21 ingress-addon-legacy-822297 kubelet[1656]: I0717 21:19:21.579665    1656 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7f1d544a689967d9fbe61b07e2dda899aec8582dfbb18c7b01bdc5dd2109872a
	Jul 17 21:19:21 ingress-addon-legacy-822297 kubelet[1656]: I0717 21:19:21.579910    1656 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 50aa4bc5dd64a633b4bce93325bd94e97f79632e47fec1802aa2cdf40201269b
	Jul 17 21:19:21 ingress-addon-legacy-822297 kubelet[1656]: E0717 21:19:21.580186    1656 pod_workers.go:191] Error syncing pod 089db6b1-fee3-462b-b413-ed153cfeb9da ("hello-world-app-5f5d8b66bb-r4rh8_default(089db6b1-fee3-462b-b413-ed153cfeb9da)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-r4rh8_default(089db6b1-fee3-462b-b413-ed153cfeb9da)"
	Jul 17 21:19:22 ingress-addon-legacy-822297 kubelet[1656]: I0717 21:19:22.550872    1656 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8a5155d6-c421-4ad6-badd-aef739d56721-webhook-cert") pod "8a5155d6-c421-4ad6-badd-aef739d56721" (UID: "8a5155d6-c421-4ad6-badd-aef739d56721")
	Jul 17 21:19:22 ingress-addon-legacy-822297 kubelet[1656]: I0717 21:19:22.550933    1656 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-8thn5" (UniqueName: "kubernetes.io/secret/8a5155d6-c421-4ad6-badd-aef739d56721-ingress-nginx-token-8thn5") pod "8a5155d6-c421-4ad6-badd-aef739d56721" (UID: "8a5155d6-c421-4ad6-badd-aef739d56721")
	Jul 17 21:19:22 ingress-addon-legacy-822297 kubelet[1656]: I0717 21:19:22.557564    1656 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a5155d6-c421-4ad6-badd-aef739d56721-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "8a5155d6-c421-4ad6-badd-aef739d56721" (UID: "8a5155d6-c421-4ad6-badd-aef739d56721"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 21:19:22 ingress-addon-legacy-822297 kubelet[1656]: I0717 21:19:22.558560    1656 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a5155d6-c421-4ad6-badd-aef739d56721-ingress-nginx-token-8thn5" (OuterVolumeSpecName: "ingress-nginx-token-8thn5") pod "8a5155d6-c421-4ad6-badd-aef739d56721" (UID: "8a5155d6-c421-4ad6-badd-aef739d56721"). InnerVolumeSpecName "ingress-nginx-token-8thn5". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 21:19:22 ingress-addon-legacy-822297 kubelet[1656]: I0717 21:19:22.651218    1656 reconciler.go:319] Volume detached for volume "ingress-nginx-token-8thn5" (UniqueName: "kubernetes.io/secret/8a5155d6-c421-4ad6-badd-aef739d56721-ingress-nginx-token-8thn5") on node "ingress-addon-legacy-822297" DevicePath ""
	Jul 17 21:19:22 ingress-addon-legacy-822297 kubelet[1656]: I0717 21:19:22.651302    1656 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8a5155d6-c421-4ad6-badd-aef739d56721-webhook-cert") on node "ingress-addon-legacy-822297" DevicePath ""
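Note: the ImageInspectError entries are the interesting signal for the ingress-dns addon: CRI-O rejects the short image name cryptexlabs/minikube-ingress-dns:0.3.0@sha256:... because no unqualified-search registries are defined in /etc/containers/registries.conf. Pulling by a fully-qualified name (docker.io/cryptexlabs/minikube-ingress-dns:...) would avoid the short-name lookup entirely; alternatively a search registry can be added inside the node. A sketch of that workaround (hypothetical; assumes the kicbase image's CRI-O reads /etc/containers/registries.conf, that no [[registry]] table precedes the appended line, and that crio can be restarted via systemd):

    out/minikube-linux-arm64 -p ingress-addon-legacy-822297 ssh -- "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf"
    out/minikube-linux-arm64 -p ingress-addon-legacy-822297 ssh -- "sudo systemctl restart crio"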
	
	* 
	* ==> storage-provisioner [d8fb68133e5de5788d647377888cb401b76c33cee85088135b1ed5e0b6146d37] <==
	* I0717 21:16:04.529472       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 21:16:04.545186       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 21:16:04.545872       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 21:16:04.553562       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 21:16:04.553775       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-822297_4ca5b12b-d260-4732-ac8a-97fd570d3b25!
	I0717 21:16:04.554771       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a1dff6d-8729-4e5a-8961-264770c7ad41", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-822297_4ca5b12b-d260-4732-ac8a-97fd570d3b25 became leader
	I0717 21:16:04.654543       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-822297_4ca5b12b-d260-4732-ac8a-97fd570d3b25!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-822297 -n ingress-addon-legacy-822297
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-822297 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (184.13s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (5.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-810165 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-810165 -- exec busybox-67b7f59bb-mdhfd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-810165 -- exec busybox-67b7f59bb-mdhfd -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-810165 -- exec busybox-67b7f59bb-mdhfd -- sh -c "ping -c 1 192.168.58.1": exit status 1 (253.943978ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-mdhfd): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-810165 -- exec busybox-67b7f59bb-zhxtx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-810165 -- exec busybox-67b7f59bb-zhxtx -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-810165 -- exec busybox-67b7f59bb-zhxtx -- sh -c "ping -c 1 192.168.58.1": exit status 1 (263.194256ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-zhxtx): exit status 1
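Both pods resolve host.minikube.internal fine; the failure is the ping itself. busybox's ping needs a raw ICMP socket, and "ping: permission denied (are you root?)" is what it prints when opening that socket returns EPERM. On the crio runtime the default container capability set does not include NET_RAW, so even a root container cannot open raw sockets unless the pod requests the capability in its securityContext or the kernel allows unprivileged ICMP echo via the net.ipv4.ping_group_range sysctl. Hypothetical checks along those lines (command style matches the test's own invocations; the sysctl name is the real kernel knob):

    out/minikube-linux-arm64 kubectl -p multinode-810165 -- exec busybox-67b7f59bb-mdhfd -- sh -c "id -u"
    out/minikube-linux-arm64 -p multinode-810165 ssh -- "sysctl net.ipv4.ping_group_range"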
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-810165
helpers_test.go:235: (dbg) docker inspect multinode-810165:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f50b1a146f82a35b69acbef510044a71ef0ffd7f7e690b15ca461dd5db496271",
	        "Created": "2023-07-17T21:25:52.938651272Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1199676,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T21:25:53.248479827Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/f50b1a146f82a35b69acbef510044a71ef0ffd7f7e690b15ca461dd5db496271/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f50b1a146f82a35b69acbef510044a71ef0ffd7f7e690b15ca461dd5db496271/hostname",
	        "HostsPath": "/var/lib/docker/containers/f50b1a146f82a35b69acbef510044a71ef0ffd7f7e690b15ca461dd5db496271/hosts",
	        "LogPath": "/var/lib/docker/containers/f50b1a146f82a35b69acbef510044a71ef0ffd7f7e690b15ca461dd5db496271/f50b1a146f82a35b69acbef510044a71ef0ffd7f7e690b15ca461dd5db496271-json.log",
	        "Name": "/multinode-810165",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-810165:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-810165",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/85f25ce8abf7fb0852a7a097aca8de1458e35b2840fa2e858a3bb6567545d35b-init/diff:/var/lib/docker/overlay2/9dd04002488337def4cdbea3f3d72ef7a2164867b83574414c8b40a7e2f88109/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85f25ce8abf7fb0852a7a097aca8de1458e35b2840fa2e858a3bb6567545d35b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85f25ce8abf7fb0852a7a097aca8de1458e35b2840fa2e858a3bb6567545d35b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85f25ce8abf7fb0852a7a097aca8de1458e35b2840fa2e858a3bb6567545d35b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-810165",
	                "Source": "/var/lib/docker/volumes/multinode-810165/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-810165",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-810165",
	                "name.minikube.sigs.k8s.io": "multinode-810165",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "25ac2d4a651a1a1975fee0933d78f8b0bd7a7153011d9184e99d928a123454ac",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34101"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34099"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34098"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/25ac2d4a651a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-810165": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f50b1a146f82",
	                        "multinode-810165"
	                    ],
	                    "NetworkID": "c64db558fd385b120313b8e1b32c6d0421f68a19814e8601bb4e8c6584e97b85",
	                    "EndpointID": "3762b394db13a23145f6929b0ab072a06eac0b38749c5008160106cce5f69712",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-810165 -n multinode-810165
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-810165 logs -n 25: (2.079395468s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-120222                           | mount-start-2-120222 | jenkins | v1.30.1 | 17 Jul 23 21:25 UTC | 17 Jul 23 21:25 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-120222 ssh -- ls                    | mount-start-2-120222 | jenkins | v1.30.1 | 17 Jul 23 21:25 UTC | 17 Jul 23 21:25 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-118006                           | mount-start-1-118006 | jenkins | v1.30.1 | 17 Jul 23 21:25 UTC | 17 Jul 23 21:25 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-120222 ssh -- ls                    | mount-start-2-120222 | jenkins | v1.30.1 | 17 Jul 23 21:25 UTC | 17 Jul 23 21:25 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-120222                           | mount-start-2-120222 | jenkins | v1.30.1 | 17 Jul 23 21:25 UTC | 17 Jul 23 21:25 UTC |
	| start   | -p mount-start-2-120222                           | mount-start-2-120222 | jenkins | v1.30.1 | 17 Jul 23 21:25 UTC | 17 Jul 23 21:25 UTC |
	| ssh     | mount-start-2-120222 ssh -- ls                    | mount-start-2-120222 | jenkins | v1.30.1 | 17 Jul 23 21:25 UTC | 17 Jul 23 21:25 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-120222                           | mount-start-2-120222 | jenkins | v1.30.1 | 17 Jul 23 21:25 UTC | 17 Jul 23 21:25 UTC |
	| delete  | -p mount-start-1-118006                           | mount-start-1-118006 | jenkins | v1.30.1 | 17 Jul 23 21:25 UTC | 17 Jul 23 21:25 UTC |
	| start   | -p multinode-810165                               | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:25 UTC | 17 Jul 23 21:27 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-810165 -- apply -f                   | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:27 UTC | 17 Jul 23 21:27 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-810165 -- rollout                    | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:27 UTC | 17 Jul 23 21:27 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-810165 -- get pods -o                | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:27 UTC | 17 Jul 23 21:27 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-810165 -- get pods -o                | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:27 UTC | 17 Jul 23 21:27 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-810165 -- exec                       | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:27 UTC | 17 Jul 23 21:27 UTC |
	|         | busybox-67b7f59bb-mdhfd --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-810165 -- exec                       | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:27 UTC | 17 Jul 23 21:27 UTC |
	|         | busybox-67b7f59bb-zhxtx --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-810165 -- exec                       | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:27 UTC | 17 Jul 23 21:27 UTC |
	|         | busybox-67b7f59bb-mdhfd --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-810165 -- exec                       | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:27 UTC | 17 Jul 23 21:27 UTC |
	|         | busybox-67b7f59bb-zhxtx --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-810165 -- exec                       | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:27 UTC | 17 Jul 23 21:27 UTC |
	|         | busybox-67b7f59bb-mdhfd -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-810165 -- exec                       | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:27 UTC | 17 Jul 23 21:27 UTC |
	|         | busybox-67b7f59bb-zhxtx -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-810165 -- get pods -o                | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:27 UTC | 17 Jul 23 21:27 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-810165 -- exec                       | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:27 UTC | 17 Jul 23 21:27 UTC |
	|         | busybox-67b7f59bb-mdhfd                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-810165 -- exec                       | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:27 UTC |                     |
	|         | busybox-67b7f59bb-mdhfd -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-810165 -- exec                       | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:27 UTC | 17 Jul 23 21:27 UTC |
	|         | busybox-67b7f59bb-zhxtx                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-810165 -- exec                       | multinode-810165     | jenkins | v1.30.1 | 17 Jul 23 21:27 UTC |                     |
	|         | busybox-67b7f59bb-zhxtx -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:25:47
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:25:47.710708 1199225 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:25:47.710855 1199225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:25:47.710864 1199225 out.go:309] Setting ErrFile to fd 2...
	I0717 21:25:47.710870 1199225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:25:47.711163 1199225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
	I0717 21:25:47.711619 1199225 out.go:303] Setting JSON to false
	I0717 21:25:47.712807 1199225 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22091,"bootTime":1689607057,"procs":383,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0717 21:25:47.712923 1199225 start.go:138] virtualization:  
	I0717 21:25:47.715272 1199225 out.go:177] * [multinode-810165] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 21:25:47.717733 1199225 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 21:25:47.719284 1199225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:25:47.717897 1199225 notify.go:220] Checking for updates...
	I0717 21:25:47.722502 1199225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:25:47.724261 1199225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	I0717 21:25:47.725984 1199225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 21:25:47.728042 1199225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:25:47.730218 1199225 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:25:47.755954 1199225 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:25:47.756056 1199225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:25:47.850608 1199225 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-17 21:25:47.840674535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:25:47.850711 1199225 docker.go:294] overlay module found
	I0717 21:25:47.852838 1199225 out.go:177] * Using the docker driver based on user configuration
	I0717 21:25:47.854706 1199225 start.go:298] selected driver: docker
	I0717 21:25:47.854728 1199225 start.go:880] validating driver "docker" against <nil>
	I0717 21:25:47.854743 1199225 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:25:47.855473 1199225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:25:47.922382 1199225 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-17 21:25:47.913189219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:25:47.922550 1199225 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 21:25:47.922771 1199225 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 21:25:47.924804 1199225 out.go:177] * Using Docker driver with root privileges
	I0717 21:25:47.926867 1199225 cni.go:84] Creating CNI manager for ""
	I0717 21:25:47.926889 1199225 cni.go:137] 0 nodes found, recommending kindnet
	I0717 21:25:47.926900 1199225 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 21:25:47.926912 1199225 start_flags.go:319] config:
	{Name:multinode-810165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-810165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
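
Note: the dump above is what gets persisted to the profile's config.json a few lines later. A small sketch (not minikube code) that reads back a handful of those fields; the struct mirrors only the names visible in the dump:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Subset of the cluster config; field names match the dump above.
type clusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	KubernetesConfig struct {
		KubernetesVersion string
		ContainerRuntime  string
	}
}

func main() {
	// Path as recorded later in this log; adjust for your MINIKUBE_HOME.
	path := "/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/config.json"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	var cfg clusterConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s: driver=%s k8s=%s runtime=%s (%d MB, %d CPUs)\n",
		cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion,
		cfg.KubernetesConfig.ContainerRuntime, cfg.Memory, cfg.CPUs)
}
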
	I0717 21:25:47.928847 1199225 out.go:177] * Starting control plane node multinode-810165 in cluster multinode-810165
	I0717 21:25:47.930276 1199225 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 21:25:47.932096 1199225 out.go:177] * Pulling base image ...
	I0717 21:25:47.933812 1199225 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:25:47.933868 1199225 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4
	I0717 21:25:47.933884 1199225 cache.go:57] Caching tarball of preloaded images
	I0717 21:25:47.933957 1199225 preload.go:174] Found /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0717 21:25:47.933982 1199225 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 21:25:47.934330 1199225 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/config.json ...
	I0717 21:25:47.934360 1199225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/config.json: {Name:mk82f790a7457fa4676be5236b1012bb4cbf891b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:25:47.933867 1199225 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 21:25:47.952002 1199225 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 21:25:47.952032 1199225 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 21:25:47.952051 1199225 cache.go:195] Successfully downloaded all kic artifacts
	I0717 21:25:47.952101 1199225 start.go:365] acquiring machines lock for multinode-810165: {Name:mk7ba59e4e9621008abeb07019fdcf32c0dc2b14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:25:47.952232 1199225 start.go:369] acquired machines lock for "multinode-810165" in 103.688µs
	I0717 21:25:47.952261 1199225 start.go:93] Provisioning new machine with config: &{Name:multinode-810165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-810165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 21:25:47.952363 1199225 start.go:125] createHost starting for "" (driver="docker")
	I0717 21:25:47.954534 1199225 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 21:25:47.954772 1199225 start.go:159] libmachine.API.Create for "multinode-810165" (driver="docker")
	I0717 21:25:47.954797 1199225 client.go:168] LocalClient.Create starting
	I0717 21:25:47.954853 1199225 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem
	I0717 21:25:47.954892 1199225 main.go:141] libmachine: Decoding PEM data...
	I0717 21:25:47.954913 1199225 main.go:141] libmachine: Parsing certificate...
	I0717 21:25:47.954969 1199225 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem
	I0717 21:25:47.954994 1199225 main.go:141] libmachine: Decoding PEM data...
	I0717 21:25:47.955009 1199225 main.go:141] libmachine: Parsing certificate...
	I0717 21:25:47.955374 1199225 cli_runner.go:164] Run: docker network inspect multinode-810165 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 21:25:47.972232 1199225 cli_runner.go:211] docker network inspect multinode-810165 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 21:25:47.972316 1199225 network_create.go:281] running [docker network inspect multinode-810165] to gather additional debugging logs...
	I0717 21:25:47.972336 1199225 cli_runner.go:164] Run: docker network inspect multinode-810165
	W0717 21:25:47.989548 1199225 cli_runner.go:211] docker network inspect multinode-810165 returned with exit code 1
	I0717 21:25:47.989580 1199225 network_create.go:284] error running [docker network inspect multinode-810165]: docker network inspect multinode-810165: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-810165 not found
	I0717 21:25:47.989593 1199225 network_create.go:286] output of [docker network inspect multinode-810165]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-810165 not found
	
	** /stderr **
	I0717 21:25:47.989655 1199225 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 21:25:48.011607 1199225 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-28f030d3740c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:10:e7:a1:da} reservation:<nil>}
	I0717 21:25:48.012003 1199225 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000bf0530}
	I0717 21:25:48.012030 1199225 network_create.go:123] attempt to create docker network multinode-810165 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0717 21:25:48.012095 1199225 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-810165 multinode-810165
	I0717 21:25:48.094867 1199225 network_create.go:107] docker network multinode-810165 192.168.58.0/24 created
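
Note: the network.go lines above show the subnet probe: 192.168.49.0/24 is already held by an existing bridge, so the picker settles on 192.168.58.0/24. A toy sketch of that walk; the step of 9 between candidates is inferred from 49 -> 58 in this log, not confirmed against the real implementation:

package main

import "fmt"

// firstFreeSubnet returns the first 192.168.x.0/24 candidate not in taken,
// stepping by 9 (49, 58, 67, ...) as the gap in the log suggests.
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third <= 254; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	// 192.168.49.0/24 is taken by the existing br-28f030d3740c bridge.
	taken := map[string]bool{"192.168.49.0/24": true}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.58.0/24
}
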
	I0717 21:25:48.094901 1199225 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-810165" container
	I0717 21:25:48.094986 1199225 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 21:25:48.117876 1199225 cli_runner.go:164] Run: docker volume create multinode-810165 --label name.minikube.sigs.k8s.io=multinode-810165 --label created_by.minikube.sigs.k8s.io=true
	I0717 21:25:48.138014 1199225 oci.go:103] Successfully created a docker volume multinode-810165
	I0717 21:25:48.138112 1199225 cli_runner.go:164] Run: docker run --rm --name multinode-810165-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-810165 --entrypoint /usr/bin/test -v multinode-810165:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 21:25:48.706379 1199225 oci.go:107] Successfully prepared a docker volume multinode-810165
	I0717 21:25:48.706425 1199225 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:25:48.706445 1199225 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 21:25:48.706531 1199225 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-810165:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 21:25:52.849960 1199225 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-810165:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.143388629s)
	I0717 21:25:52.850000 1199225 kic.go:199] duration metric: took 4.143547 seconds to extract preloaded images to volume
	W0717 21:25:52.850154 1199225 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 21:25:52.850273 1199225 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 21:25:52.922610 1199225 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-810165 --name multinode-810165 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-810165 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-810165 --network multinode-810165 --ip 192.168.58.2 --volume multinode-810165:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 21:25:53.256627 1199225 cli_runner.go:164] Run: docker container inspect multinode-810165 --format={{.State.Running}}
	I0717 21:25:53.278634 1199225 cli_runner.go:164] Run: docker container inspect multinode-810165 --format={{.State.Status}}
	I0717 21:25:53.302545 1199225 cli_runner.go:164] Run: docker exec multinode-810165 stat /var/lib/dpkg/alternatives/iptables
	I0717 21:25:53.381012 1199225 oci.go:144] the created container "multinode-810165" has a running status.
	I0717 21:25:53.381039 1199225 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165/id_rsa...
	I0717 21:25:53.950627 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0717 21:25:53.950715 1199225 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 21:25:53.979153 1199225 cli_runner.go:164] Run: docker container inspect multinode-810165 --format={{.State.Status}}
	I0717 21:25:54.002077 1199225 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 21:25:54.002098 1199225 kic_runner.go:114] Args: [docker exec --privileged multinode-810165 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 21:25:54.081037 1199225 cli_runner.go:164] Run: docker container inspect multinode-810165 --format={{.State.Status}}
	I0717 21:25:54.113361 1199225 machine.go:88] provisioning docker machine ...
	I0717 21:25:54.113392 1199225 ubuntu.go:169] provisioning hostname "multinode-810165"
	I0717 21:25:54.113459 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165
	I0717 21:25:54.151499 1199225 main.go:141] libmachine: Using SSH client type: native
	I0717 21:25:54.151969 1199225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34101 <nil> <nil>}
	I0717 21:25:54.151983 1199225 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-810165 && echo "multinode-810165" | sudo tee /etc/hostname
	I0717 21:25:54.327808 1199225 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-810165
	
	I0717 21:25:54.327887 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165
	I0717 21:25:54.350028 1199225 main.go:141] libmachine: Using SSH client type: native
	I0717 21:25:54.350512 1199225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34101 <nil> <nil>}
	I0717 21:25:54.350537 1199225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-810165' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-810165/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-810165' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 21:25:54.487151 1199225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 21:25:54.487184 1199225 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1130480/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1130480/.minikube}
	I0717 21:25:54.487206 1199225 ubuntu.go:177] setting up certificates
	I0717 21:25:54.487222 1199225 provision.go:83] configureAuth start
	I0717 21:25:54.487283 1199225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-810165
	I0717 21:25:54.514891 1199225 provision.go:138] copyHostCerts
	I0717 21:25:54.514937 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem
	I0717 21:25:54.514971 1199225 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem, removing ...
	I0717 21:25:54.514982 1199225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem
	I0717 21:25:54.515069 1199225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem (1082 bytes)
	I0717 21:25:54.515151 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem
	I0717 21:25:54.515173 1199225 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem, removing ...
	I0717 21:25:54.515178 1199225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem
	I0717 21:25:54.515209 1199225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem (1123 bytes)
	I0717 21:25:54.515261 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem
	I0717 21:25:54.515288 1199225 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem, removing ...
	I0717 21:25:54.515296 1199225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem
	I0717 21:25:54.515320 1199225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem (1675 bytes)
	I0717 21:25:54.515371 1199225 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem org=jenkins.multinode-810165 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-810165]
	I0717 21:25:54.707825 1199225 provision.go:172] copyRemoteCerts
	I0717 21:25:54.707915 1199225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 21:25:54.707955 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165
	I0717 21:25:54.728727 1199225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34101 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165/id_rsa Username:docker}
	I0717 21:25:54.827997 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 21:25:54.828057 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 21:25:54.856568 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 21:25:54.856628 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 21:25:54.885346 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 21:25:54.885405 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 21:25:54.914065 1199225 provision.go:86] duration metric: configureAuth took 426.828467ms
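
Note: configureAuth above generates a server certificate whose SANs cover the node IP plus the usual local names. A self-contained sketch of issuing such a certificate with Go's crypto/x509; it self-signs for brevity, whereas the real step signs with the profile CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-810165"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		// SAN list as logged: node IP, loopback, and the local hostnames.
		IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "multinode-810165"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
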
	I0717 21:25:54.914090 1199225 ubuntu.go:193] setting minikube options for container-runtime
	I0717 21:25:54.914290 1199225 config.go:182] Loaded profile config "multinode-810165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:25:54.914408 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165
	I0717 21:25:54.932008 1199225 main.go:141] libmachine: Using SSH client type: native
	I0717 21:25:54.932519 1199225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34101 <nil> <nil>}
	I0717 21:25:54.932545 1199225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 21:25:55.194926 1199225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 21:25:55.194995 1199225 machine.go:91] provisioned docker machine in 1.081614354s
	I0717 21:25:55.195012 1199225 client.go:171] LocalClient.Create took 7.240208784s
	I0717 21:25:55.195025 1199225 start.go:167] duration metric: libmachine.API.Create for "multinode-810165" took 7.240254101s
	I0717 21:25:55.195033 1199225 start.go:300] post-start starting for "multinode-810165" (driver="docker")
	I0717 21:25:55.195043 1199225 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 21:25:55.195173 1199225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 21:25:55.195253 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165
	I0717 21:25:55.214771 1199225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34101 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165/id_rsa Username:docker}
	I0717 21:25:55.312073 1199225 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 21:25:55.316181 1199225 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0717 21:25:55.316248 1199225 command_runner.go:130] > NAME="Ubuntu"
	I0717 21:25:55.316272 1199225 command_runner.go:130] > VERSION_ID="22.04"
	I0717 21:25:55.316295 1199225 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0717 21:25:55.316302 1199225 command_runner.go:130] > VERSION_CODENAME=jammy
	I0717 21:25:55.316307 1199225 command_runner.go:130] > ID=ubuntu
	I0717 21:25:55.316321 1199225 command_runner.go:130] > ID_LIKE=debian
	I0717 21:25:55.316333 1199225 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0717 21:25:55.316342 1199225 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0717 21:25:55.316352 1199225 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0717 21:25:55.316363 1199225 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0717 21:25:55.316368 1199225 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0717 21:25:55.316429 1199225 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 21:25:55.316459 1199225 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 21:25:55.316482 1199225 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 21:25:55.316493 1199225 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 21:25:55.316504 1199225 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/addons for local assets ...
	I0717 21:25:55.316572 1199225 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/files for local assets ...
	I0717 21:25:55.316652 1199225 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem -> 11358722.pem in /etc/ssl/certs
	I0717 21:25:55.316662 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem -> /etc/ssl/certs/11358722.pem
	I0717 21:25:55.316772 1199225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 21:25:55.327641 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem --> /etc/ssl/certs/11358722.pem (1708 bytes)
	I0717 21:25:55.357032 1199225 start.go:303] post-start completed in 161.983889ms
	I0717 21:25:55.357505 1199225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-810165
	I0717 21:25:55.375387 1199225 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/config.json ...
	I0717 21:25:55.375672 1199225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:25:55.375722 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165
	I0717 21:25:55.394185 1199225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34101 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165/id_rsa Username:docker}
	I0717 21:25:55.483309 1199225 command_runner.go:130] > 16%
	I0717 21:25:55.483402 1199225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 21:25:55.488858 1199225 command_runner.go:130] > 165G
	I0717 21:25:55.489327 1199225 start.go:128] duration metric: createHost completed in 7.536949142s
	I0717 21:25:55.489355 1199225 start.go:83] releasing machines lock for "multinode-810165", held for 7.537112219s
	I0717 21:25:55.489436 1199225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-810165
	I0717 21:25:55.506685 1199225 ssh_runner.go:195] Run: cat /version.json
	I0717 21:25:55.506745 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165
	I0717 21:25:55.506695 1199225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 21:25:55.506879 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165
	I0717 21:25:55.530771 1199225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34101 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165/id_rsa Username:docker}
	I0717 21:25:55.537075 1199225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34101 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165/id_rsa Username:docker}
	I0717 21:25:55.756289 1199225 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 21:25:55.756331 1199225 command_runner.go:130] > {"iso_version": "v1.30.1-1689243309-16875", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "085433cd1b734742870dea5be8f9ee2ce4c54148"}
	W0717 21:25:55.756431 1199225 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
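
Note: the warning above is produced by comparing /version.json from the kicbase image against the running binary's version. A sketch of that comparison, fed with the JSON recorded two lines up (in the real flow the file is read from inside the node over SSH):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Sample taken verbatim from this log.
	raw := []byte(`{"iso_version": "v1.30.1-1689243309-16875", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "085433cd1b734742870dea5be8f9ee2ce4c54148"}`)
	var v struct {
		MinikubeVersion string `json:"minikube_version"`
	}
	if err := json.Unmarshal(raw, &v); err != nil {
		panic(err)
	}
	const running = "v1.30.1" // version of the binary under test
	if v.MinikubeVersion != running {
		fmt.Fprintf(os.Stderr, "image built for %s, running %s\n", v.MinikubeVersion, running)
	}
}
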
	I0717 21:25:55.756506 1199225 ssh_runner.go:195] Run: systemctl --version
	I0717 21:25:55.762356 1199225 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0717 21:25:55.762399 1199225 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0717 21:25:55.762533 1199225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 21:25:55.908607 1199225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 21:25:55.914491 1199225 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0717 21:25:55.914561 1199225 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0717 21:25:55.914586 1199225 command_runner.go:130] > Device: 3ah/58d	Inode: 5189919     Links: 1
	I0717 21:25:55.914594 1199225 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 21:25:55.914602 1199225 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0717 21:25:55.914609 1199225 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0717 21:25:55.914630 1199225 command_runner.go:130] > Change: 2023-07-17 21:03:28.884783195 +0000
	I0717 21:25:55.914644 1199225 command_runner.go:130] >  Birth: 2023-07-17 21:03:28.880783199 +0000
	I0717 21:25:55.914744 1199225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:25:55.938365 1199225 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 21:25:55.938466 1199225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:25:55.978534 1199225 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0717 21:25:55.978560 1199225 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 21:25:55.978571 1199225 start.go:469] detecting cgroup driver to use...
	I0717 21:25:55.978634 1199225 detect.go:196] detected "cgroupfs" cgroup driver on host os
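
Note: the detect step settles on "cgroupfs", which matches the CgroupDriver field in the docker info dumps earlier in this log. One way to query that value directly; this is a plausible probe, not necessarily the exact one minikube uses:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the host docker daemon which cgroup driver it runs with.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // "cgroupfs" on this host
}
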
	I0717 21:25:55.978712 1199225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 21:25:55.998461 1199225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 21:25:56.013940 1199225 docker.go:196] disabling cri-docker service (if available) ...
	I0717 21:25:56.014041 1199225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 21:25:56.030731 1199225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 21:25:56.047520 1199225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 21:25:56.145460 1199225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 21:25:56.262542 1199225 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0717 21:25:56.262570 1199225 docker.go:212] disabling docker service ...
	I0717 21:25:56.262631 1199225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 21:25:56.287456 1199225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 21:25:56.302690 1199225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 21:25:56.399603 1199225 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0717 21:25:56.399712 1199225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 21:25:56.505017 1199225 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0717 21:25:56.505135 1199225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 21:25:56.518587 1199225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 21:25:56.537454 1199225 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 21:25:56.538605 1199225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 21:25:56.538729 1199225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:25:56.552494 1199225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 21:25:56.552563 1199225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:25:56.565588 1199225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:25:56.578182 1199225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:25:56.591285 1199225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
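
Note: the sed runs above repoint CRI-O at the 3.9 pause image and switch it to the cgroupfs manager inside /etc/crio/crio.conf.d/02-crio.conf. The same two substitutions sketched locally in Go on an inlined sample (the real run edits the file over SSH):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for the drop-in file's current contents.
	conf := []byte("pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n")
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	fmt.Print(string(conf))
}
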
	I0717 21:25:56.602875 1199225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 21:25:56.613378 1199225 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 21:25:56.613512 1199225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 21:25:56.624399 1199225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 21:25:56.714887 1199225 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 21:25:56.846030 1199225 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 21:25:56.846150 1199225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 21:25:56.851600 1199225 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 21:25:56.851625 1199225 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 21:25:56.851633 1199225 command_runner.go:130] > Device: 43h/67d	Inode: 186         Links: 1
	I0717 21:25:56.851641 1199225 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 21:25:56.851647 1199225 command_runner.go:130] > Access: 2023-07-17 21:25:56.831269922 +0000
	I0717 21:25:56.851657 1199225 command_runner.go:130] > Modify: 2023-07-17 21:25:56.831269922 +0000
	I0717 21:25:56.851664 1199225 command_runner.go:130] > Change: 2023-07-17 21:25:56.831269922 +0000
	I0717 21:25:56.851668 1199225 command_runner.go:130] >  Birth: -
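
Note: the stat above confirms the crio socket appeared well inside the 60s budget. A connect-based variant of that wait (the log only stats the path; dialing is a stricter readiness check):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket polls until the unix socket accepts a connection or the
// timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if c, err := net.Dial("unix", path); err == nil {
			c.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is up")
}
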
	I0717 21:25:56.851694 1199225 start.go:537] Will wait 60s for crictl version
	I0717 21:25:56.851754 1199225 ssh_runner.go:195] Run: which crictl
	I0717 21:25:56.856471 1199225 command_runner.go:130] > /usr/bin/crictl
	I0717 21:25:56.856556 1199225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 21:25:56.901991 1199225 command_runner.go:130] > Version:  0.1.0
	I0717 21:25:56.902262 1199225 command_runner.go:130] > RuntimeName:  cri-o
	I0717 21:25:56.902507 1199225 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0717 21:25:56.902708 1199225 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 21:25:56.905854 1199225 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 21:25:56.905934 1199225 ssh_runner.go:195] Run: crio --version
	I0717 21:25:56.951958 1199225 command_runner.go:130] > crio version 1.24.6
	I0717 21:25:56.951978 1199225 command_runner.go:130] > Version:          1.24.6
	I0717 21:25:56.951986 1199225 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0717 21:25:56.951992 1199225 command_runner.go:130] > GitTreeState:     clean
	I0717 21:25:56.951998 1199225 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0717 21:25:56.952003 1199225 command_runner.go:130] > GoVersion:        go1.18.2
	I0717 21:25:56.952008 1199225 command_runner.go:130] > Compiler:         gc
	I0717 21:25:56.952014 1199225 command_runner.go:130] > Platform:         linux/arm64
	I0717 21:25:56.952020 1199225 command_runner.go:130] > Linkmode:         dynamic
	I0717 21:25:56.952035 1199225 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 21:25:56.952044 1199225 command_runner.go:130] > SeccompEnabled:   true
	I0717 21:25:56.952050 1199225 command_runner.go:130] > AppArmorEnabled:  false
	I0717 21:25:56.952155 1199225 ssh_runner.go:195] Run: crio --version
	I0717 21:25:56.996283 1199225 command_runner.go:130] > crio version 1.24.6
	I0717 21:25:56.996306 1199225 command_runner.go:130] > Version:          1.24.6
	I0717 21:25:56.996316 1199225 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0717 21:25:56.996321 1199225 command_runner.go:130] > GitTreeState:     clean
	I0717 21:25:56.996328 1199225 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0717 21:25:56.996333 1199225 command_runner.go:130] > GoVersion:        go1.18.2
	I0717 21:25:56.996338 1199225 command_runner.go:130] > Compiler:         gc
	I0717 21:25:56.996343 1199225 command_runner.go:130] > Platform:         linux/arm64
	I0717 21:25:56.996350 1199225 command_runner.go:130] > Linkmode:         dynamic
	I0717 21:25:56.996359 1199225 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 21:25:56.996367 1199225 command_runner.go:130] > SeccompEnabled:   true
	I0717 21:25:56.996372 1199225 command_runner.go:130] > AppArmorEnabled:  false
	I0717 21:25:57.006005 1199225 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0717 21:25:57.008007 1199225 cli_runner.go:164] Run: docker network inspect multinode-810165 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 21:25:57.026237 1199225 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0717 21:25:57.031314 1199225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
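
Note: the bash pipeline above strips any stale host.minikube.internal line and appends the gateway IP. An equivalent sketch in Go, writing to a scratch file instead of /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const name = "host.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as the grep -v above: drop lines ending in "\thost.minikube.internal".
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.58.1\t"+name)
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("hosts.new", []byte(out), 0644); err != nil {
		panic(err)
	}
	fmt.Print(out)
}
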
	I0717 21:25:57.046154 1199225 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:25:57.046242 1199225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:25:57.113424 1199225 command_runner.go:130] > {
	I0717 21:25:57.113441 1199225 command_runner.go:130] >   "images": [
	I0717 21:25:57.113447 1199225 command_runner.go:130] >     {
	I0717 21:25:57.113457 1199225 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0717 21:25:57.113463 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.113470 1199225 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0717 21:25:57.113475 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.113480 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.113498 1199225 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0717 21:25:57.113508 1199225 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0717 21:25:57.113515 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.113521 1199225 command_runner.go:130] >       "size": "60881430",
	I0717 21:25:57.113528 1199225 command_runner.go:130] >       "uid": null,
	I0717 21:25:57.113536 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.113551 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.113558 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.113562 1199225 command_runner.go:130] >     },
	I0717 21:25:57.113577 1199225 command_runner.go:130] >     {
	I0717 21:25:57.113587 1199225 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0717 21:25:57.113597 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.113613 1199225 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 21:25:57.113626 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.113640 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.113655 1199225 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0717 21:25:57.113672 1199225 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0717 21:25:57.113677 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.113684 1199225 command_runner.go:130] >       "size": "29037500",
	I0717 21:25:57.113690 1199225 command_runner.go:130] >       "uid": null,
	I0717 21:25:57.113697 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.113706 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.113711 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.113716 1199225 command_runner.go:130] >     },
	I0717 21:25:57.113720 1199225 command_runner.go:130] >     {
	I0717 21:25:57.113728 1199225 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0717 21:25:57.113742 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.113749 1199225 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0717 21:25:57.113756 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.113764 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.113774 1199225 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0717 21:25:57.113786 1199225 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0717 21:25:57.113791 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.113797 1199225 command_runner.go:130] >       "size": "51393451",
	I0717 21:25:57.113802 1199225 command_runner.go:130] >       "uid": null,
	I0717 21:25:57.113807 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.113812 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.113820 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.113826 1199225 command_runner.go:130] >     },
	I0717 21:25:57.113831 1199225 command_runner.go:130] >     {
	I0717 21:25:57.113848 1199225 command_runner.go:130] >       "id": "24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737",
	I0717 21:25:57.113853 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.113859 1199225 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0717 21:25:57.113866 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.113871 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.113880 1199225 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd",
	I0717 21:25:57.113892 1199225 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"
	I0717 21:25:57.113904 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.113912 1199225 command_runner.go:130] >       "size": "182283991",
	I0717 21:25:57.113917 1199225 command_runner.go:130] >       "uid": {
	I0717 21:25:57.113922 1199225 command_runner.go:130] >         "value": "0"
	I0717 21:25:57.113926 1199225 command_runner.go:130] >       },
	I0717 21:25:57.113933 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.113940 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.113950 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.113955 1199225 command_runner.go:130] >     },
	I0717 21:25:57.113959 1199225 command_runner.go:130] >     {
	I0717 21:25:57.113967 1199225 command_runner.go:130] >       "id": "39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473",
	I0717 21:25:57.113973 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.113982 1199225 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0717 21:25:57.113988 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.113994 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.114005 1199225 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:699defe487a15c642f6f7718de0684e49f4353e6c63f93308d314aab4dedd090",
	I0717 21:25:57.114014 1199225 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0717 21:25:57.114023 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.114029 1199225 command_runner.go:130] >       "size": "116204496",
	I0717 21:25:57.114039 1199225 command_runner.go:130] >       "uid": {
	I0717 21:25:57.114044 1199225 command_runner.go:130] >         "value": "0"
	I0717 21:25:57.114048 1199225 command_runner.go:130] >       },
	I0717 21:25:57.114053 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.114058 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.114063 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.114069 1199225 command_runner.go:130] >     },
	I0717 21:25:57.114074 1199225 command_runner.go:130] >     {
	I0717 21:25:57.114083 1199225 command_runner.go:130] >       "id": "ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8",
	I0717 21:25:57.114092 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.114101 1199225 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0717 21:25:57.114107 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.114112 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.114123 1199225 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:06e413293f95c209052e171448fe17685f625c5edfbc7b63df5d87d07b4711c0",
	I0717 21:25:57.114133 1199225 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e"
	I0717 21:25:57.114139 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.114146 1199225 command_runner.go:130] >       "size": "108667702",
	I0717 21:25:57.114151 1199225 command_runner.go:130] >       "uid": {
	I0717 21:25:57.114160 1199225 command_runner.go:130] >         "value": "0"
	I0717 21:25:57.114167 1199225 command_runner.go:130] >       },
	I0717 21:25:57.114172 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.114177 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.114183 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.114190 1199225 command_runner.go:130] >     },
	I0717 21:25:57.114194 1199225 command_runner.go:130] >     {
	I0717 21:25:57.114202 1199225 command_runner.go:130] >       "id": "fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a",
	I0717 21:25:57.114210 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.114215 1199225 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0717 21:25:57.114220 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.114225 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.114234 1199225 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:948423f9b566c1f1bfab123911520168c041193addb9157d7121eaf2bb5afc53",
	I0717 21:25:57.114245 1199225 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0717 21:25:57.114253 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.114258 1199225 command_runner.go:130] >       "size": "68099991",
	I0717 21:25:57.114264 1199225 command_runner.go:130] >       "uid": null,
	I0717 21:25:57.114269 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.114276 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.114281 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.114285 1199225 command_runner.go:130] >     },
	I0717 21:25:57.114292 1199225 command_runner.go:130] >     {
	I0717 21:25:57.114299 1199225 command_runner.go:130] >       "id": "bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540",
	I0717 21:25:57.114304 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.114311 1199225 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0717 21:25:57.114316 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.114322 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.114366 1199225 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:4cc5890f8b0fc5fb3f8e07535254f8ad97d90a0335bedcc8773db4ad1e7481bf",
	I0717 21:25:57.114379 1199225 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0717 21:25:57.114383 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.114388 1199225 command_runner.go:130] >       "size": "57615158",
	I0717 21:25:57.114393 1199225 command_runner.go:130] >       "uid": {
	I0717 21:25:57.114398 1199225 command_runner.go:130] >         "value": "0"
	I0717 21:25:57.114402 1199225 command_runner.go:130] >       },
	I0717 21:25:57.114407 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.114412 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.114417 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.114421 1199225 command_runner.go:130] >     },
	I0717 21:25:57.114425 1199225 command_runner.go:130] >     {
	I0717 21:25:57.114433 1199225 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0717 21:25:57.114437 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.114444 1199225 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 21:25:57.114450 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.114456 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.114473 1199225 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0717 21:25:57.114482 1199225 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0717 21:25:57.114488 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.114494 1199225 command_runner.go:130] >       "size": "520014",
	I0717 21:25:57.114498 1199225 command_runner.go:130] >       "uid": {
	I0717 21:25:57.114506 1199225 command_runner.go:130] >         "value": "65535"
	I0717 21:25:57.114511 1199225 command_runner.go:130] >       },
	I0717 21:25:57.114516 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.114521 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.114529 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.114535 1199225 command_runner.go:130] >     }
	I0717 21:25:57.114539 1199225 command_runner.go:130] >   ]
	I0717 21:25:57.114545 1199225 command_runner.go:130] > }
	I0717 21:25:57.117581 1199225 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 21:25:57.117603 1199225 crio.go:415] Images already preloaded, skipping extraction
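The JSON inventory above is exactly what `sudo crictl images --output json` prints; minikube walks it to decide whether the preload step can be skipped. As a minimal, illustrative sketch only (the struct and variable names below are assumptions, not minikube's actual types), decoding such output in Go needs nothing beyond fields matching the keys shown:

// Sketch: decode a `crictl images --output json` payload like the one above.
// The struct shape mirrors the JSON fields in this log and is illustrative.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}

Note that crictl quotes `size` in its JSON ("51393451"), so it decodes as a string rather than an integer.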
	I0717 21:25:57.117664 1199225 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:25:57.159116 1199225 command_runner.go:130] > {
	I0717 21:25:57.159133 1199225 command_runner.go:130] >   "images": [
	I0717 21:25:57.159138 1199225 command_runner.go:130] >     {
	I0717 21:25:57.159148 1199225 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0717 21:25:57.159153 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.159160 1199225 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0717 21:25:57.159165 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159170 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.159180 1199225 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0717 21:25:57.159189 1199225 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0717 21:25:57.159194 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159199 1199225 command_runner.go:130] >       "size": "60881430",
	I0717 21:25:57.159204 1199225 command_runner.go:130] >       "uid": null,
	I0717 21:25:57.159225 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.159234 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.159239 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.159243 1199225 command_runner.go:130] >     },
	I0717 21:25:57.159248 1199225 command_runner.go:130] >     {
	I0717 21:25:57.159255 1199225 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0717 21:25:57.159260 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.159266 1199225 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 21:25:57.159270 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159275 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.159284 1199225 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0717 21:25:57.159294 1199225 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0717 21:25:57.159305 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159313 1199225 command_runner.go:130] >       "size": "29037500",
	I0717 21:25:57.159318 1199225 command_runner.go:130] >       "uid": null,
	I0717 21:25:57.159322 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.159327 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.159332 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.159337 1199225 command_runner.go:130] >     },
	I0717 21:25:57.159341 1199225 command_runner.go:130] >     {
	I0717 21:25:57.159349 1199225 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0717 21:25:57.159354 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.159360 1199225 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0717 21:25:57.159364 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159368 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.159378 1199225 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0717 21:25:57.159387 1199225 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0717 21:25:57.159391 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159396 1199225 command_runner.go:130] >       "size": "51393451",
	I0717 21:25:57.159401 1199225 command_runner.go:130] >       "uid": null,
	I0717 21:25:57.159406 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.159411 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.159417 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.159421 1199225 command_runner.go:130] >     },
	I0717 21:25:57.159425 1199225 command_runner.go:130] >     {
	I0717 21:25:57.159433 1199225 command_runner.go:130] >       "id": "24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737",
	I0717 21:25:57.159439 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.159445 1199225 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0717 21:25:57.159449 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159454 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.159462 1199225 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd",
	I0717 21:25:57.159471 1199225 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"
	I0717 21:25:57.159479 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159484 1199225 command_runner.go:130] >       "size": "182283991",
	I0717 21:25:57.159489 1199225 command_runner.go:130] >       "uid": {
	I0717 21:25:57.159493 1199225 command_runner.go:130] >         "value": "0"
	I0717 21:25:57.159497 1199225 command_runner.go:130] >       },
	I0717 21:25:57.159502 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.159507 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.159511 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.159515 1199225 command_runner.go:130] >     },
	I0717 21:25:57.159520 1199225 command_runner.go:130] >     {
	I0717 21:25:57.159527 1199225 command_runner.go:130] >       "id": "39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473",
	I0717 21:25:57.159532 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.159539 1199225 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0717 21:25:57.159543 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159548 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.159557 1199225 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:699defe487a15c642f6f7718de0684e49f4353e6c63f93308d314aab4dedd090",
	I0717 21:25:57.159566 1199225 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0717 21:25:57.159570 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159575 1199225 command_runner.go:130] >       "size": "116204496",
	I0717 21:25:57.159579 1199225 command_runner.go:130] >       "uid": {
	I0717 21:25:57.159584 1199225 command_runner.go:130] >         "value": "0"
	I0717 21:25:57.159588 1199225 command_runner.go:130] >       },
	I0717 21:25:57.159592 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.159597 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.159602 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.159606 1199225 command_runner.go:130] >     },
	I0717 21:25:57.159609 1199225 command_runner.go:130] >     {
	I0717 21:25:57.159618 1199225 command_runner.go:130] >       "id": "ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8",
	I0717 21:25:57.159623 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.159629 1199225 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0717 21:25:57.159635 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159640 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.159649 1199225 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:06e413293f95c209052e171448fe17685f625c5edfbc7b63df5d87d07b4711c0",
	I0717 21:25:57.159659 1199225 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e"
	I0717 21:25:57.159663 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159669 1199225 command_runner.go:130] >       "size": "108667702",
	I0717 21:25:57.159673 1199225 command_runner.go:130] >       "uid": {
	I0717 21:25:57.159678 1199225 command_runner.go:130] >         "value": "0"
	I0717 21:25:57.159682 1199225 command_runner.go:130] >       },
	I0717 21:25:57.159687 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.159691 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.159696 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.159700 1199225 command_runner.go:130] >     },
	I0717 21:25:57.159704 1199225 command_runner.go:130] >     {
	I0717 21:25:57.159711 1199225 command_runner.go:130] >       "id": "fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a",
	I0717 21:25:57.159715 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.159721 1199225 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0717 21:25:57.159726 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159732 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.159741 1199225 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:948423f9b566c1f1bfab123911520168c041193addb9157d7121eaf2bb5afc53",
	I0717 21:25:57.159750 1199225 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0717 21:25:57.159754 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159758 1199225 command_runner.go:130] >       "size": "68099991",
	I0717 21:25:57.159763 1199225 command_runner.go:130] >       "uid": null,
	I0717 21:25:57.159768 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.159772 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.159777 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.159781 1199225 command_runner.go:130] >     },
	I0717 21:25:57.159785 1199225 command_runner.go:130] >     {
	I0717 21:25:57.159792 1199225 command_runner.go:130] >       "id": "bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540",
	I0717 21:25:57.159797 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.159803 1199225 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0717 21:25:57.159807 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159811 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.159849 1199225 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:4cc5890f8b0fc5fb3f8e07535254f8ad97d90a0335bedcc8773db4ad1e7481bf",
	I0717 21:25:57.159859 1199225 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0717 21:25:57.159865 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159870 1199225 command_runner.go:130] >       "size": "57615158",
	I0717 21:25:57.159874 1199225 command_runner.go:130] >       "uid": {
	I0717 21:25:57.159879 1199225 command_runner.go:130] >         "value": "0"
	I0717 21:25:57.159883 1199225 command_runner.go:130] >       },
	I0717 21:25:57.159888 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.159892 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.159897 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.159901 1199225 command_runner.go:130] >     },
	I0717 21:25:57.159905 1199225 command_runner.go:130] >     {
	I0717 21:25:57.159912 1199225 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0717 21:25:57.159917 1199225 command_runner.go:130] >       "repoTags": [
	I0717 21:25:57.159922 1199225 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 21:25:57.159927 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159931 1199225 command_runner.go:130] >       "repoDigests": [
	I0717 21:25:57.159940 1199225 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0717 21:25:57.159950 1199225 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0717 21:25:57.159954 1199225 command_runner.go:130] >       ],
	I0717 21:25:57.159960 1199225 command_runner.go:130] >       "size": "520014",
	I0717 21:25:57.159964 1199225 command_runner.go:130] >       "uid": {
	I0717 21:25:57.159969 1199225 command_runner.go:130] >         "value": "65535"
	I0717 21:25:57.159973 1199225 command_runner.go:130] >       },
	I0717 21:25:57.159978 1199225 command_runner.go:130] >       "username": "",
	I0717 21:25:57.159982 1199225 command_runner.go:130] >       "spec": null,
	I0717 21:25:57.159987 1199225 command_runner.go:130] >       "pinned": false
	I0717 21:25:57.159991 1199225 command_runner.go:130] >     }
	I0717 21:25:57.159995 1199225 command_runner.go:130] >   ]
	I0717 21:25:57.159999 1199225 command_runner.go:130] > }
	I0717 21:25:57.161684 1199225 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 21:25:57.161702 1199225 cache_images.go:84] Images are preloaded, skipping loading
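The `crio config` run below dumps CRI-O's effective configuration as TOML; the non-commented lines (`conmon_cgroup = "pod"`, `cgroup_manager = "cgroupfs"`, `pause_image = "registry.k8s.io/pause:3.9"`) are the values this minikube run has set. A hedged sketch of reading those values back, assuming the third-party github.com/BurntSushi/toml package (struct names here are illustrative):

// Sketch: parse `crio config` output and read the few settings this run sets.
// Nested structs mirror the [crio], [crio.runtime] and [crio.image] tables.
package main

import (
	"fmt"
	"os/exec"

	"github.com/BurntSushi/toml"
)

type crioConfig struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
			ConmonCgroup  string `toml:"conmon_cgroup"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	out, err := exec.Command("crio", "config").Output()
	if err != nil {
		panic(err)
	}
	var cfg crioConfig
	if err := toml.Unmarshal(out, &cfg); err != nil {
		panic(err)
	}
	fmt.Println("cgroup_manager:", cfg.Crio.Runtime.CgroupManager)
	fmt.Println("conmon_cgroup:", cfg.Crio.Runtime.ConmonCgroup)
	fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)
}

Keys the struct does not declare are simply left undecoded, which is why a partial struct like this is enough.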
	I0717 21:25:57.161779 1199225 ssh_runner.go:195] Run: crio config
	I0717 21:25:57.210464 1199225 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 21:25:57.210543 1199225 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 21:25:57.210576 1199225 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 21:25:57.210611 1199225 command_runner.go:130] > #
	I0717 21:25:57.210647 1199225 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 21:25:57.210672 1199225 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 21:25:57.210696 1199225 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 21:25:57.210737 1199225 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 21:25:57.210777 1199225 command_runner.go:130] > # reload'.
	I0717 21:25:57.210802 1199225 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 21:25:57.210825 1199225 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 21:25:57.210858 1199225 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 21:25:57.210880 1199225 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 21:25:57.210899 1199225 command_runner.go:130] > [crio]
	I0717 21:25:57.210932 1199225 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 21:25:57.210952 1199225 command_runner.go:130] > # containers images, in this directory.
	I0717 21:25:57.210978 1199225 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0717 21:25:57.211000 1199225 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 21:25:57.211030 1199225 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0717 21:25:57.211056 1199225 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 21:25:57.211078 1199225 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 21:25:57.211413 1199225 command_runner.go:130] > # storage_driver = "vfs"
	I0717 21:25:57.211427 1199225 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 21:25:57.211435 1199225 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 21:25:57.211440 1199225 command_runner.go:130] > # storage_option = [
	I0717 21:25:57.211444 1199225 command_runner.go:130] > # ]
	I0717 21:25:57.211456 1199225 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 21:25:57.211463 1199225 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 21:25:57.211877 1199225 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 21:25:57.211895 1199225 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 21:25:57.211906 1199225 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 21:25:57.211913 1199225 command_runner.go:130] > # always happen on a node reboot
	I0717 21:25:57.211919 1199225 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 21:25:57.211925 1199225 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 21:25:57.211933 1199225 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 21:25:57.211943 1199225 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 21:25:57.211949 1199225 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 21:25:57.211958 1199225 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 21:25:57.211971 1199225 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 21:25:57.211976 1199225 command_runner.go:130] > # internal_wipe = true
	I0717 21:25:57.211983 1199225 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 21:25:57.211990 1199225 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 21:25:57.211997 1199225 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 21:25:57.212003 1199225 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 21:25:57.212020 1199225 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 21:25:57.212024 1199225 command_runner.go:130] > [crio.api]
	I0717 21:25:57.212031 1199225 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 21:25:57.212037 1199225 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 21:25:57.212043 1199225 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 21:25:57.212048 1199225 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 21:25:57.212057 1199225 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 21:25:57.212065 1199225 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 21:25:57.212070 1199225 command_runner.go:130] > # stream_port = "0"
	I0717 21:25:57.212076 1199225 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 21:25:57.212081 1199225 command_runner.go:130] > # stream_enable_tls = false
	I0717 21:25:57.212088 1199225 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 21:25:57.212093 1199225 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 21:25:57.212101 1199225 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 21:25:57.212108 1199225 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 21:25:57.212113 1199225 command_runner.go:130] > # minutes.
	I0717 21:25:57.212118 1199225 command_runner.go:130] > # stream_tls_cert = ""
	I0717 21:25:57.212125 1199225 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 21:25:57.212136 1199225 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 21:25:57.212141 1199225 command_runner.go:130] > # stream_tls_key = ""
	I0717 21:25:57.212148 1199225 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 21:25:57.212155 1199225 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 21:25:57.212163 1199225 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 21:25:57.212168 1199225 command_runner.go:130] > # stream_tls_ca = ""
	I0717 21:25:57.212180 1199225 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 21:25:57.212191 1199225 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0717 21:25:57.212203 1199225 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 21:25:57.212211 1199225 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0717 21:25:57.212239 1199225 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 21:25:57.212246 1199225 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 21:25:57.212250 1199225 command_runner.go:130] > [crio.runtime]
	I0717 21:25:57.212271 1199225 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 21:25:57.212282 1199225 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 21:25:57.212288 1199225 command_runner.go:130] > # "nofile=1024:2048"
	I0717 21:25:57.212304 1199225 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 21:25:57.212309 1199225 command_runner.go:130] > # default_ulimits = [
	I0717 21:25:57.212313 1199225 command_runner.go:130] > # ]
	I0717 21:25:57.212320 1199225 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 21:25:57.212331 1199225 command_runner.go:130] > # no_pivot = false
	I0717 21:25:57.212342 1199225 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 21:25:57.212350 1199225 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 21:25:57.212356 1199225 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 21:25:57.212363 1199225 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 21:25:57.212370 1199225 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 21:25:57.212383 1199225 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 21:25:57.212388 1199225 command_runner.go:130] > # conmon = ""
	I0717 21:25:57.212393 1199225 command_runner.go:130] > # Cgroup setting for conmon
	I0717 21:25:57.212401 1199225 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 21:25:57.212406 1199225 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 21:25:57.212413 1199225 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 21:25:57.212422 1199225 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 21:25:57.212433 1199225 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 21:25:57.212438 1199225 command_runner.go:130] > # conmon_env = [
	I0717 21:25:57.212442 1199225 command_runner.go:130] > # ]
	I0717 21:25:57.212449 1199225 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 21:25:57.212457 1199225 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 21:25:57.212467 1199225 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 21:25:57.212471 1199225 command_runner.go:130] > # default_env = [
	I0717 21:25:57.212475 1199225 command_runner.go:130] > # ]
	I0717 21:25:57.212482 1199225 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 21:25:57.212488 1199225 command_runner.go:130] > # selinux = false
	I0717 21:25:57.212496 1199225 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 21:25:57.212508 1199225 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 21:25:57.212516 1199225 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 21:25:57.212520 1199225 command_runner.go:130] > # seccomp_profile = ""
	I0717 21:25:57.212527 1199225 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 21:25:57.212534 1199225 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 21:25:57.212541 1199225 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 21:25:57.212547 1199225 command_runner.go:130] > # which might increase security.
	I0717 21:25:57.212553 1199225 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0717 21:25:57.212564 1199225 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 21:25:57.212571 1199225 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 21:25:57.212578 1199225 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 21:25:57.212589 1199225 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 21:25:57.212595 1199225 command_runner.go:130] > # This option supports live configuration reload.
	I0717 21:25:57.212601 1199225 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 21:25:57.212609 1199225 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 21:25:57.212614 1199225 command_runner.go:130] > # the cgroup blockio controller.
	I0717 21:25:57.212624 1199225 command_runner.go:130] > # blockio_config_file = ""
	I0717 21:25:57.212637 1199225 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 21:25:57.212642 1199225 command_runner.go:130] > # irqbalance daemon.
	I0717 21:25:57.212649 1199225 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 21:25:57.212659 1199225 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 21:25:57.212669 1199225 command_runner.go:130] > # This option supports live configuration reload.
	I0717 21:25:57.212674 1199225 command_runner.go:130] > # rdt_config_file = ""
	I0717 21:25:57.212680 1199225 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 21:25:57.212685 1199225 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 21:25:57.212692 1199225 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 21:25:57.212697 1199225 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 21:25:57.212708 1199225 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 21:25:57.212718 1199225 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 21:25:57.212723 1199225 command_runner.go:130] > # will be added.
	I0717 21:25:57.212735 1199225 command_runner.go:130] > # default_capabilities = [
	I0717 21:25:57.212740 1199225 command_runner.go:130] > # 	"CHOWN",
	I0717 21:25:57.212747 1199225 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 21:25:57.212751 1199225 command_runner.go:130] > # 	"FSETID",
	I0717 21:25:57.212756 1199225 command_runner.go:130] > # 	"FOWNER",
	I0717 21:25:57.212762 1199225 command_runner.go:130] > # 	"SETGID",
	I0717 21:25:57.212766 1199225 command_runner.go:130] > # 	"SETUID",
	I0717 21:25:57.212772 1199225 command_runner.go:130] > # 	"SETPCAP",
	I0717 21:25:57.212777 1199225 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 21:25:57.212784 1199225 command_runner.go:130] > # 	"KILL",
	I0717 21:25:57.212788 1199225 command_runner.go:130] > # ]
	I0717 21:25:57.212797 1199225 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0717 21:25:57.212805 1199225 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0717 21:25:57.212813 1199225 command_runner.go:130] > # add_inheritable_capabilities = true
	I0717 21:25:57.212824 1199225 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 21:25:57.212831 1199225 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 21:25:57.212835 1199225 command_runner.go:130] > # default_sysctls = [
	I0717 21:25:57.212839 1199225 command_runner.go:130] > # ]
	I0717 21:25:57.212845 1199225 command_runner.go:130] > # List of devices on the host that a
	I0717 21:25:57.212852 1199225 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 21:25:57.212857 1199225 command_runner.go:130] > # allowed_devices = [
	I0717 21:25:57.212861 1199225 command_runner.go:130] > # 	"/dev/fuse",
	I0717 21:25:57.212865 1199225 command_runner.go:130] > # ]
	I0717 21:25:57.212875 1199225 command_runner.go:130] > # List of additional devices, specified as
	I0717 21:25:57.212921 1199225 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 21:25:57.212928 1199225 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 21:25:57.212935 1199225 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 21:25:57.212942 1199225 command_runner.go:130] > # additional_devices = [
	I0717 21:25:57.212946 1199225 command_runner.go:130] > # ]
	I0717 21:25:57.212952 1199225 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 21:25:57.212958 1199225 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 21:25:57.212962 1199225 command_runner.go:130] > # 	"/etc/cdi",
	I0717 21:25:57.212972 1199225 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 21:25:57.212976 1199225 command_runner.go:130] > # ]
	I0717 21:25:57.212983 1199225 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 21:25:57.212997 1199225 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 21:25:57.213004 1199225 command_runner.go:130] > # Defaults to false.
	I0717 21:25:57.213011 1199225 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 21:25:57.213018 1199225 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 21:25:57.213026 1199225 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 21:25:57.213030 1199225 command_runner.go:130] > # hooks_dir = [
	I0717 21:25:57.213037 1199225 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 21:25:57.213041 1199225 command_runner.go:130] > # ]
	I0717 21:25:57.213051 1199225 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 21:25:57.213062 1199225 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 21:25:57.213068 1199225 command_runner.go:130] > # its default mounts from the following two files:
	I0717 21:25:57.213072 1199225 command_runner.go:130] > #
	I0717 21:25:57.213086 1199225 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 21:25:57.213093 1199225 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 21:25:57.213101 1199225 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 21:25:57.213104 1199225 command_runner.go:130] > #
	I0717 21:25:57.213112 1199225 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 21:25:57.213119 1199225 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 21:25:57.213127 1199225 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 21:25:57.213133 1199225 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 21:25:57.213137 1199225 command_runner.go:130] > #
	I0717 21:25:57.213145 1199225 command_runner.go:130] > # default_mounts_file = ""
	I0717 21:25:57.213164 1199225 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 21:25:57.213173 1199225 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 21:25:57.215629 1199225 command_runner.go:130] > # pids_limit = 0
	I0717 21:25:57.215665 1199225 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 21:25:57.215700 1199225 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 21:25:57.215721 1199225 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 21:25:57.215737 1199225 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 21:25:57.215745 1199225 command_runner.go:130] > # log_size_max = -1
	I0717 21:25:57.215769 1199225 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 21:25:57.215783 1199225 command_runner.go:130] > # log_to_journald = false
	I0717 21:25:57.215806 1199225 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 21:25:57.215820 1199225 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 21:25:57.215828 1199225 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 21:25:57.215839 1199225 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 21:25:57.215860 1199225 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 21:25:57.215872 1199225 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 21:25:57.215886 1199225 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 21:25:57.215894 1199225 command_runner.go:130] > # read_only = false
	I0717 21:25:57.215905 1199225 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 21:25:57.215929 1199225 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 21:25:57.215939 1199225 command_runner.go:130] > # live configuration reload.
	I0717 21:25:57.215944 1199225 command_runner.go:130] > # log_level = "info"
	I0717 21:25:57.215952 1199225 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 21:25:57.215965 1199225 command_runner.go:130] > # This option supports live configuration reload.
	I0717 21:25:57.215970 1199225 command_runner.go:130] > # log_filter = ""
	I0717 21:25:57.215980 1199225 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 21:25:57.215995 1199225 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 21:25:57.216001 1199225 command_runner.go:130] > # separated by comma.
	I0717 21:25:57.216006 1199225 command_runner.go:130] > # uid_mappings = ""
	I0717 21:25:57.216020 1199225 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 21:25:57.216029 1199225 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 21:25:57.216037 1199225 command_runner.go:130] > # separated by comma.
	I0717 21:25:57.216042 1199225 command_runner.go:130] > # gid_mappings = ""
	I0717 21:25:57.216059 1199225 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 21:25:57.216072 1199225 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 21:25:57.216088 1199225 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 21:25:57.216098 1199225 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 21:25:57.216108 1199225 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 21:25:57.216122 1199225 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 21:25:57.216139 1199225 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 21:25:57.216145 1199225 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 21:25:57.216156 1199225 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 21:25:57.216166 1199225 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 21:25:57.216176 1199225 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 21:25:57.216185 1199225 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 21:25:57.216195 1199225 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 21:25:57.216251 1199225 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 21:25:57.216269 1199225 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 21:25:57.216276 1199225 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 21:25:57.216281 1199225 command_runner.go:130] > # drop_infra_ctr = true
	I0717 21:25:57.216298 1199225 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 21:25:57.216306 1199225 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 21:25:57.216319 1199225 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 21:25:57.216334 1199225 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 21:25:57.216343 1199225 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 21:25:57.216357 1199225 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 21:25:57.216367 1199225 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 21:25:57.216379 1199225 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 21:25:57.216385 1199225 command_runner.go:130] > # pinns_path = ""
	I0717 21:25:57.216396 1199225 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 21:25:57.216409 1199225 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 21:25:57.216432 1199225 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 21:25:57.216442 1199225 command_runner.go:130] > # default_runtime = "runc"
	I0717 21:25:57.216448 1199225 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 21:25:57.216466 1199225 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0717 21:25:57.216478 1199225 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 21:25:57.216491 1199225 command_runner.go:130] > # creation as a file is not desired either.
	I0717 21:25:57.216503 1199225 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 21:25:57.216513 1199225 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 21:25:57.216523 1199225 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 21:25:57.216530 1199225 command_runner.go:130] > # ]
	I0717 21:25:57.216543 1199225 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 21:25:57.216555 1199225 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 21:25:57.216568 1199225 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 21:25:57.216581 1199225 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 21:25:57.216587 1199225 command_runner.go:130] > #
	I0717 21:25:57.216595 1199225 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 21:25:57.216611 1199225 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 21:25:57.216629 1199225 command_runner.go:130] > #  runtime_type = "oci"
	I0717 21:25:57.216640 1199225 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 21:25:57.216650 1199225 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 21:25:57.216655 1199225 command_runner.go:130] > #  allowed_annotations = []
	I0717 21:25:57.216662 1199225 command_runner.go:130] > # Where:
	I0717 21:25:57.216673 1199225 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 21:25:57.216689 1199225 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 21:25:57.216698 1199225 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 21:25:57.216712 1199225 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 21:25:57.216716 1199225 command_runner.go:130] > #   in $PATH.
	I0717 21:25:57.216728 1199225 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 21:25:57.216744 1199225 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 21:25:57.216752 1199225 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 21:25:57.216774 1199225 command_runner.go:130] > #   state.
	I0717 21:25:57.216796 1199225 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 21:25:57.216806 1199225 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0717 21:25:57.216819 1199225 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 21:25:57.216829 1199225 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 21:25:57.216837 1199225 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 21:25:57.216853 1199225 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 21:25:57.216862 1199225 command_runner.go:130] > #   The currently recognized values are:
	I0717 21:25:57.216874 1199225 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 21:25:57.216886 1199225 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 21:25:57.216895 1199225 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 21:25:57.216917 1199225 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 21:25:57.216930 1199225 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 21:25:57.216949 1199225 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 21:25:57.216960 1199225 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 21:25:57.216973 1199225 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 21:25:57.216985 1199225 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 21:25:57.216991 1199225 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 21:25:57.217004 1199225 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0717 21:25:57.217015 1199225 command_runner.go:130] > runtime_type = "oci"
	I0717 21:25:57.217021 1199225 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 21:25:57.217028 1199225 command_runner.go:130] > runtime_config_path = ""
	I0717 21:25:57.217033 1199225 command_runner.go:130] > monitor_path = ""
	I0717 21:25:57.217043 1199225 command_runner.go:130] > monitor_cgroup = ""
	I0717 21:25:57.217052 1199225 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 21:25:57.217116 1199225 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 21:25:57.217125 1199225 command_runner.go:130] > # running containers
	I0717 21:25:57.217142 1199225 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 21:25:57.217182 1199225 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 21:25:57.217204 1199225 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 21:25:57.217220 1199225 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0717 21:25:57.217232 1199225 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 21:25:57.217238 1199225 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 21:25:57.217255 1199225 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 21:25:57.217260 1199225 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 21:25:57.217267 1199225 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 21:25:57.217278 1199225 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0717 21:25:57.217297 1199225 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 21:25:57.217307 1199225 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 21:25:57.217325 1199225 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 21:25:57.217342 1199225 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 21:25:57.217362 1199225 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 21:25:57.217374 1199225 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 21:25:57.217390 1199225 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 21:25:57.217405 1199225 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 21:25:57.217422 1199225 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 21:25:57.217436 1199225 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 21:25:57.217443 1199225 command_runner.go:130] > # Example:
	I0717 21:25:57.217454 1199225 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 21:25:57.217462 1199225 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 21:25:57.217469 1199225 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 21:25:57.217481 1199225 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 21:25:57.217486 1199225 command_runner.go:130] > # cpuset = 0
	I0717 21:25:57.217493 1199225 command_runner.go:130] > # cpushares = "0-1"
	I0717 21:25:57.217504 1199225 command_runner.go:130] > # Where:
	I0717 21:25:57.217513 1199225 command_runner.go:130] > # The workload name is workload-type.
	I0717 21:25:57.217525 1199225 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 21:25:57.217534 1199225 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 21:25:57.217546 1199225 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 21:25:57.217566 1199225 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 21:25:57.217581 1199225 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 21:25:57.217587 1199225 command_runner.go:130] > # 
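
To make the commented example above concrete: a pod opts into the "workload-type" workload via the activation annotation, and may override a resource per container. A hypothetical Go sketch of those annotations (the container name "app" and the cpushares value are illustrative, not taken from this run):

	package workloads // illustrative

	// exampleWorkloadAnnotations mirrors the commented example above:
	// the activation key is matched key-only (value ignored), and the
	// per-container annotation carries a JSON body naming the override.
	func exampleWorkloadAnnotations() map[string]string {
		return map[string]string{
			"io.crio/workload":          "",                     // activation: value ignored
			"io.crio.workload-type/app": `{"cpushares": "512"}`, // override for container "app"
		}
	}
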
	I0717 21:25:57.217595 1199225 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 21:25:57.217601 1199225 command_runner.go:130] > #
	I0717 21:25:57.217614 1199225 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 21:25:57.217630 1199225 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 21:25:57.217647 1199225 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 21:25:57.217657 1199225 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 21:25:57.217665 1199225 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 21:25:57.217676 1199225 command_runner.go:130] > [crio.image]
	I0717 21:25:57.217684 1199225 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 21:25:57.217693 1199225 command_runner.go:130] > # default_transport = "docker://"
	I0717 21:25:57.217713 1199225 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 21:25:57.217724 1199225 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 21:25:57.217730 1199225 command_runner.go:130] > # global_auth_file = ""
	I0717 21:25:57.217743 1199225 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 21:25:57.217753 1199225 command_runner.go:130] > # This option supports live configuration reload.
	I0717 21:25:57.217762 1199225 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 21:25:57.217778 1199225 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 21:25:57.217788 1199225 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 21:25:57.217794 1199225 command_runner.go:130] > # This option supports live configuration reload.
	I0717 21:25:57.218889 1199225 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 21:25:57.218907 1199225 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 21:25:57.218922 1199225 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0717 21:25:57.218930 1199225 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0717 21:25:57.218949 1199225 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 21:25:57.218958 1199225 command_runner.go:130] > # pause_command = "/pause"
	I0717 21:25:57.218966 1199225 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 21:25:57.218982 1199225 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 21:25:57.218990 1199225 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 21:25:57.219007 1199225 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 21:25:57.219022 1199225 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 21:25:57.219027 1199225 command_runner.go:130] > # signature_policy = ""
	I0717 21:25:57.219039 1199225 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 21:25:57.219051 1199225 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 21:25:57.219061 1199225 command_runner.go:130] > # changing them here.
	I0717 21:25:57.219073 1199225 command_runner.go:130] > # insecure_registries = [
	I0717 21:25:57.219081 1199225 command_runner.go:130] > # ]
	I0717 21:25:57.219093 1199225 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 21:25:57.219099 1199225 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 21:25:57.219111 1199225 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 21:25:57.219121 1199225 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 21:25:57.219133 1199225 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 21:25:57.219141 1199225 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 21:25:57.219146 1199225 command_runner.go:130] > # CNI plugins.
	I0717 21:25:57.219163 1199225 command_runner.go:130] > [crio.network]
	I0717 21:25:57.219176 1199225 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 21:25:57.219182 1199225 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 21:25:57.219197 1199225 command_runner.go:130] > # cni_default_network = ""
	I0717 21:25:57.219209 1199225 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 21:25:57.219222 1199225 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 21:25:57.219232 1199225 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 21:25:57.219241 1199225 command_runner.go:130] > # plugin_dirs = [
	I0717 21:25:57.219245 1199225 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 21:25:57.219250 1199225 command_runner.go:130] > # ]
	I0717 21:25:57.219261 1199225 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 21:25:57.219269 1199225 command_runner.go:130] > [crio.metrics]
	I0717 21:25:57.219275 1199225 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 21:25:57.219282 1199225 command_runner.go:130] > # enable_metrics = false
	I0717 21:25:57.219295 1199225 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 21:25:57.219308 1199225 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 21:25:57.219315 1199225 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0717 21:25:57.219329 1199225 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 21:25:57.219343 1199225 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 21:25:57.219352 1199225 command_runner.go:130] > # metrics_collectors = [
	I0717 21:25:57.219362 1199225 command_runner.go:130] > # 	"operations",
	I0717 21:25:57.219376 1199225 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 21:25:57.219385 1199225 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 21:25:57.219390 1199225 command_runner.go:130] > # 	"operations_errors",
	I0717 21:25:57.219398 1199225 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 21:25:57.219403 1199225 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 21:25:57.219413 1199225 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 21:25:57.219421 1199225 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 21:25:57.219426 1199225 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 21:25:57.219435 1199225 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 21:25:57.219443 1199225 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 21:25:57.219456 1199225 command_runner.go:130] > # 	"containers_oom_total",
	I0717 21:25:57.219461 1199225 command_runner.go:130] > # 	"containers_oom",
	I0717 21:25:57.219466 1199225 command_runner.go:130] > # 	"processes_defunct",
	I0717 21:25:57.219474 1199225 command_runner.go:130] > # 	"operations_total",
	I0717 21:25:57.219485 1199225 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 21:25:57.219494 1199225 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 21:25:57.219500 1199225 command_runner.go:130] > # 	"operations_errors_total",
	I0717 21:25:57.219508 1199225 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 21:25:57.219520 1199225 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 21:25:57.219533 1199225 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 21:25:57.219539 1199225 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 21:25:57.219550 1199225 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 21:25:57.219562 1199225 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 21:25:57.219566 1199225 command_runner.go:130] > # ]
	I0717 21:25:57.219575 1199225 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 21:25:57.219583 1199225 command_runner.go:130] > # metrics_port = 9090
	I0717 21:25:57.219592 1199225 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 21:25:57.219600 1199225 command_runner.go:130] > # metrics_socket = ""
	I0717 21:25:57.219606 1199225 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 21:25:57.219620 1199225 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 21:25:57.219633 1199225 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 21:25:57.219642 1199225 command_runner.go:130] > # certificate on any modification event.
	I0717 21:25:57.219647 1199225 command_runner.go:130] > # metrics_cert = ""
	I0717 21:25:57.219658 1199225 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 21:25:57.219666 1199225 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 21:25:57.219671 1199225 command_runner.go:130] > # metrics_key = ""
	I0717 21:25:57.219692 1199225 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 21:25:57.219707 1199225 command_runner.go:130] > [crio.tracing]
	I0717 21:25:57.219714 1199225 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 21:25:57.219722 1199225 command_runner.go:130] > # enable_tracing = false
	I0717 21:25:57.219732 1199225 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0717 21:25:57.219741 1199225 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 21:25:57.219747 1199225 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 21:25:57.219755 1199225 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 21:25:57.219767 1199225 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 21:25:57.219774 1199225 command_runner.go:130] > [crio.stats]
	I0717 21:25:57.219781 1199225 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 21:25:57.219794 1199225 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 21:25:57.219802 1199225 command_runner.go:130] > # stats_collection_period = 0
	I0717 21:25:57.219834 1199225 command_runner.go:130] ! time="2023-07-17 21:25:57.207901339Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0717 21:25:57.219852 1199225 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
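
A note on reading the dump above: command_runner re-logs each captured line with a "> " prefix for stdout and a "! " prefix for stderr (the two CRI-O startup messages at the end arrived on stderr). A minimal Go sketch of that convention, assuming only what this log shows; classify is a hypothetical helper, not minikube code:

	package main

	import (
		"fmt"
		"strings"
	)

	// classify mirrors the prefix convention visible above:
	// "> " marks a captured stdout line, "! " a captured stderr line.
	func classify(line string) string {
		switch {
		case strings.HasPrefix(line, "> "):
			return "stdout: " + strings.TrimPrefix(line, "> ")
		case strings.HasPrefix(line, "! "):
			return "stderr: " + strings.TrimPrefix(line, "! ")
		default:
			return "uncaptured: " + line
		}
	}

	func main() {
		fmt.Println(classify(`> pause_image = "registry.k8s.io/pause:3.9"`))
		fmt.Println(classify(`! level=info msg="Starting CRI-O, version: 1.24.6"`))
	}
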
	I0717 21:25:57.219945 1199225 cni.go:84] Creating CNI manager for ""
	I0717 21:25:57.219959 1199225 cni.go:137] 1 nodes found, recommending kindnet
	I0717 21:25:57.219976 1199225 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 21:25:57.220001 1199225 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-810165 NodeName:multinode-810165 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 21:25:57.220219 1199225 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-810165"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
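
The generated config above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---"; it is written to /var/tmp/minikube/kubeadm.yaml.new and later cp'd over /var/tmp/minikube/kubeadm.yaml, as the log lines below show. A hedged helper for pulling the documents apart; splitYAMLDocs is illustrative, not minikube code:

	package kubeadmcfg // illustrative

	import "strings"

	// splitYAMLDocs separates a multi-document YAML file like the one
	// above on its "---" markers, dropping empty fragments, so each
	// section can be inspected on its own.
	func splitYAMLDocs(s string) []string {
		var docs []string
		for _, d := range strings.Split(s, "\n---\n") {
			if strings.TrimSpace(d) != "" {
				docs = append(docs, strings.TrimSpace(d))
			}
		}
		return docs
	}
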
	I0717 21:25:57.220322 1199225 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-810165 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-810165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 21:25:57.220416 1199225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 21:25:57.233126 1199225 command_runner.go:130] > kubeadm
	I0717 21:25:57.233147 1199225 command_runner.go:130] > kubectl
	I0717 21:25:57.233165 1199225 command_runner.go:130] > kubelet
	I0717 21:25:57.233210 1199225 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 21:25:57.233295 1199225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 21:25:57.244319 1199225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0717 21:25:57.266048 1199225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 21:25:57.287171 1199225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0717 21:25:57.308956 1199225 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0717 21:25:57.313695 1199225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
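
The bash pipeline above makes the /etc/hosts update idempotent: it filters out any stale control-plane.minikube.internal line, appends the fresh "IP<tab>host" mapping, and copies the temp file back over /etc/hosts. The same logic as a hedged Go sketch (updateHosts is illustrative, not the minikube implementation):

	package hosts // illustrative

	import "strings"

	// updateHosts drops any existing line ending in "\t"+host and
	// appends a fresh "ip\thost" entry, mirroring the pipeline above.
	func updateHosts(contents, ip, host string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // stale entry for this host
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return strings.Join(kept, "\n") + "\n"
	}
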
	I0717 21:25:57.327131 1199225 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165 for IP: 192.168.58.2
	I0717 21:25:57.327164 1199225 certs.go:190] acquiring lock for shared ca certs: {Name:mk8e5c72a7d7e3f9ffe23960b258dcb0da4448fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:25:57.327351 1199225 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.key
	I0717 21:25:57.327406 1199225 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.key
	I0717 21:25:57.327454 1199225 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.key
	I0717 21:25:57.327470 1199225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.crt with IP's: []
	I0717 21:25:57.567156 1199225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.crt ...
	I0717 21:25:57.567185 1199225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.crt: {Name:mkd9194d5a99ecb3c99a5ba88536a0848dc63199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:25:57.567387 1199225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.key ...
	I0717 21:25:57.567400 1199225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.key: {Name:mka41a07713432a1b5c92cd06f4ad8150758810b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:25:57.567492 1199225 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/apiserver.key.cee25041
	I0717 21:25:57.567509 1199225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 21:25:57.943170 1199225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/apiserver.crt.cee25041 ...
	I0717 21:25:57.943201 1199225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/apiserver.crt.cee25041: {Name:mkdf370ebdb99b146243f172d80fad87288fe90a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:25:57.943401 1199225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/apiserver.key.cee25041 ...
	I0717 21:25:57.943414 1199225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/apiserver.key.cee25041: {Name:mk271a73d0afc9cd4822349c4c8e519a15ae1c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:25:57.943496 1199225 certs.go:337] copying /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/apiserver.crt
	I0717 21:25:57.943574 1199225 certs.go:341] copying /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/apiserver.key
	I0717 21:25:57.943635 1199225 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/proxy-client.key
	I0717 21:25:57.943653 1199225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/proxy-client.crt with IP's: []
	I0717 21:25:58.383755 1199225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/proxy-client.crt ...
	I0717 21:25:58.383784 1199225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/proxy-client.crt: {Name:mk5dc438389b560bc9a48e6b101eaeffdba68a35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:25:58.383974 1199225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/proxy-client.key ...
	I0717 21:25:58.383986 1199225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/proxy-client.key: {Name:mk0b938adfa7ceedd2fc5a20f7517df113c444dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:25:58.384067 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 21:25:58.384087 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 21:25:58.384099 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 21:25:58.384110 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 21:25:58.384124 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 21:25:58.384141 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 21:25:58.384155 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 21:25:58.384170 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 21:25:58.384224 1199225 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/1135872.pem (1338 bytes)
	W0717 21:25:58.384266 1199225 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/1135872_empty.pem, impossibly tiny 0 bytes
	I0717 21:25:58.384278 1199225 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 21:25:58.384304 1199225 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem (1082 bytes)
	I0717 21:25:58.384334 1199225 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem (1123 bytes)
	I0717 21:25:58.384361 1199225 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem (1675 bytes)
	I0717 21:25:58.384413 1199225 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem (1708 bytes)
	I0717 21:25:58.384443 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem -> /usr/share/ca-certificates/11358722.pem
	I0717 21:25:58.384461 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:25:58.384472 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/1135872.pem -> /usr/share/ca-certificates/1135872.pem
	I0717 21:25:58.385019 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 21:25:58.415389 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 21:25:58.444253 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 21:25:58.472964 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 21:25:58.502972 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 21:25:58.534586 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 21:25:58.563966 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 21:25:58.593116 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 21:25:58.623148 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem --> /usr/share/ca-certificates/11358722.pem (1708 bytes)
	I0717 21:25:58.652260 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 21:25:58.680804 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/1135872.pem --> /usr/share/ca-certificates/1135872.pem (1338 bytes)
	I0717 21:25:58.709806 1199225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 21:25:58.731107 1199225 ssh_runner.go:195] Run: openssl version
	I0717 21:25:58.738125 1199225 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0717 21:25:58.738560 1199225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11358722.pem && ln -fs /usr/share/ca-certificates/11358722.pem /etc/ssl/certs/11358722.pem"
	I0717 21:25:58.751696 1199225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11358722.pem
	I0717 21:25:58.756338 1199225 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 21:10 /usr/share/ca-certificates/11358722.pem
	I0717 21:25:58.756372 1199225 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:10 /usr/share/ca-certificates/11358722.pem
	I0717 21:25:58.756453 1199225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11358722.pem
	I0717 21:25:58.764599 1199225 command_runner.go:130] > 3ec20f2e
	I0717 21:25:58.765097 1199225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11358722.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 21:25:58.777012 1199225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 21:25:58.788521 1199225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:25:58.793104 1199225 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 21:03 /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:25:58.793135 1199225 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:03 /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:25:58.793283 1199225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:25:58.801984 1199225 command_runner.go:130] > b5213941
	I0717 21:25:58.802063 1199225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 21:25:58.813888 1199225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1135872.pem && ln -fs /usr/share/ca-certificates/1135872.pem /etc/ssl/certs/1135872.pem"
	I0717 21:25:58.825947 1199225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1135872.pem
	I0717 21:25:58.830697 1199225 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 21:10 /usr/share/ca-certificates/1135872.pem
	I0717 21:25:58.830763 1199225 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:10 /usr/share/ca-certificates/1135872.pem
	I0717 21:25:58.830826 1199225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1135872.pem
	I0717 21:25:58.839582 1199225 command_runner.go:130] > 51391683
	I0717 21:25:58.839962 1199225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1135872.pem /etc/ssl/certs/51391683.0"
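
Each certificate install above follows the OpenSSL lookup convention: "openssl x509 -hash" prints the certificate's subject hash (b5213941 for minikubeCA.pem, for instance), and a "<hash>.0" symlink under /etc/ssl/certs lets TLS clients locate the certificate by that hash. A hedged Go equivalent of the two steps (installCACert is illustrative; it assumes openssl on PATH, as the log does):

	package certs // illustrative

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCACert computes the subject hash with openssl, then links
	// /etc/ssl/certs/<hash>.0 at the certificate, like the "ln -fs" above.
	func installCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // tolerate a stale link, as -f does
		return os.Symlink(pemPath, link)
	}
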
	I0717 21:25:58.851687 1199225 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 21:25:58.857103 1199225 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 21:25:58.857193 1199225 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 21:25:58.857250 1199225 kubeadm.go:404] StartCluster: {Name:multinode-810165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-810165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:25:58.857362 1199225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 21:25:58.857455 1199225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 21:25:58.901282 1199225 cri.go:89] found id: ""
	I0717 21:25:58.901354 1199225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 21:25:58.912370 1199225 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0717 21:25:58.912399 1199225 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0717 21:25:58.912409 1199225 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0717 21:25:58.912484 1199225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 21:25:58.925670 1199225 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 21:25:58.925772 1199225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 21:25:58.936352 1199225 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0717 21:25:58.936374 1199225 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0717 21:25:58.936383 1199225 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0717 21:25:58.936391 1199225 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 21:25:58.936415 1199225 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 21:25:58.936452 1199225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 21:25:58.990424 1199225 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 21:25:58.990455 1199225 command_runner.go:130] > [init] Using Kubernetes version: v1.27.3
	I0717 21:25:58.990809 1199225 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 21:25:58.990827 1199225 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 21:25:59.041518 1199225 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 21:25:59.041545 1199225 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0717 21:25:59.041598 1199225 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-aws
	I0717 21:25:59.041608 1199225 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1039-aws
	I0717 21:25:59.041639 1199225 kubeadm.go:322] OS: Linux
	I0717 21:25:59.041649 1199225 command_runner.go:130] > OS: Linux
	I0717 21:25:59.041691 1199225 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 21:25:59.041700 1199225 command_runner.go:130] > CGROUPS_CPU: enabled
	I0717 21:25:59.041745 1199225 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 21:25:59.041752 1199225 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0717 21:25:59.041796 1199225 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 21:25:59.041804 1199225 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0717 21:25:59.041849 1199225 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 21:25:59.041858 1199225 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0717 21:25:59.041903 1199225 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 21:25:59.041911 1199225 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0717 21:25:59.041956 1199225 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 21:25:59.041964 1199225 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0717 21:25:59.042007 1199225 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0717 21:25:59.042014 1199225 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0717 21:25:59.042059 1199225 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0717 21:25:59.042067 1199225 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0717 21:25:59.042110 1199225 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0717 21:25:59.042121 1199225 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0717 21:25:59.124546 1199225 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 21:25:59.124612 1199225 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 21:25:59.124767 1199225 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 21:25:59.124792 1199225 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 21:25:59.124929 1199225 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 21:25:59.124954 1199225 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 21:25:59.388797 1199225 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 21:25:59.391659 1199225 out.go:204]   - Generating certificates and keys ...
	I0717 21:25:59.389043 1199225 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 21:25:59.391868 1199225 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 21:25:59.391911 1199225 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0717 21:25:59.392015 1199225 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 21:25:59.392043 1199225 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0717 21:25:59.986760 1199225 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 21:25:59.986790 1199225 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 21:26:00.654904 1199225 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 21:26:00.654930 1199225 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0717 21:26:01.183397 1199225 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 21:26:01.183428 1199225 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0717 21:26:01.613865 1199225 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 21:26:01.613889 1199225 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0717 21:26:02.510835 1199225 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 21:26:02.510864 1199225 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0717 21:26:02.511013 1199225 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-810165] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0717 21:26:02.511025 1199225 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-810165] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0717 21:26:02.650444 1199225 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 21:26:02.650475 1199225 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0717 21:26:02.651005 1199225 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-810165] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0717 21:26:02.651027 1199225 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-810165] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0717 21:26:03.536935 1199225 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 21:26:03.536961 1199225 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 21:26:03.967009 1199225 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 21:26:03.967034 1199225 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 21:26:04.490307 1199225 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 21:26:04.490609 1199225 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0717 21:26:04.491036 1199225 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 21:26:04.491056 1199225 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 21:26:04.929971 1199225 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 21:26:04.929997 1199225 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 21:26:05.088692 1199225 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 21:26:05.088721 1199225 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 21:26:05.262838 1199225 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 21:26:05.262863 1199225 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 21:26:05.916660 1199225 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 21:26:05.916685 1199225 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 21:26:05.928770 1199225 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 21:26:05.928794 1199225 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 21:26:05.929969 1199225 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 21:26:05.929988 1199225 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 21:26:05.930274 1199225 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 21:26:05.930288 1199225 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 21:26:06.038258 1199225 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 21:26:06.040794 1199225 out.go:204]   - Booting up control plane ...
	I0717 21:26:06.038404 1199225 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 21:26:06.040912 1199225 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 21:26:06.040926 1199225 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 21:26:06.041920 1199225 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 21:26:06.041940 1199225 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 21:26:06.043077 1199225 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 21:26:06.043095 1199225 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 21:26:06.043900 1199225 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 21:26:06.043919 1199225 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 21:26:06.046475 1199225 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 21:26:06.046498 1199225 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 21:26:14.051214 1199225 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003145 seconds
	I0717 21:26:14.051239 1199225 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.003145 seconds
	I0717 21:26:14.051340 1199225 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 21:26:14.051345 1199225 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 21:26:14.067589 1199225 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 21:26:14.067617 1199225 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 21:26:14.594627 1199225 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 21:26:14.594652 1199225 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0717 21:26:14.594825 1199225 kubeadm.go:322] [mark-control-plane] Marking the node multinode-810165 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 21:26:14.594831 1199225 command_runner.go:130] > [mark-control-plane] Marking the node multinode-810165 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 21:26:15.127829 1199225 kubeadm.go:322] [bootstrap-token] Using token: p0ew34.nt0ks9b55499h13p
	I0717 21:26:15.129494 1199225 out.go:204]   - Configuring RBAC rules ...
	I0717 21:26:15.127947 1199225 command_runner.go:130] > [bootstrap-token] Using token: p0ew34.nt0ks9b55499h13p
	I0717 21:26:15.129625 1199225 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 21:26:15.129638 1199225 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 21:26:15.136952 1199225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 21:26:15.136976 1199225 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 21:26:15.148359 1199225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 21:26:15.148389 1199225 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 21:26:15.154504 1199225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 21:26:15.154534 1199225 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 21:26:15.160157 1199225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 21:26:15.160187 1199225 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 21:26:15.165706 1199225 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 21:26:15.165730 1199225 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 21:26:15.184149 1199225 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 21:26:15.184222 1199225 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 21:26:15.395575 1199225 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 21:26:15.395598 1199225 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0717 21:26:15.544352 1199225 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 21:26:15.544374 1199225 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0717 21:26:15.545523 1199225 kubeadm.go:322] 
	I0717 21:26:15.545595 1199225 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 21:26:15.545605 1199225 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0717 21:26:15.545609 1199225 kubeadm.go:322] 
	I0717 21:26:15.545682 1199225 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 21:26:15.545687 1199225 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0717 21:26:15.545691 1199225 kubeadm.go:322] 
	I0717 21:26:15.545715 1199225 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 21:26:15.545720 1199225 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0717 21:26:15.545774 1199225 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 21:26:15.545779 1199225 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 21:26:15.545826 1199225 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 21:26:15.545830 1199225 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 21:26:15.545838 1199225 kubeadm.go:322] 
	I0717 21:26:15.545889 1199225 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 21:26:15.545894 1199225 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0717 21:26:15.545897 1199225 kubeadm.go:322] 
	I0717 21:26:15.545943 1199225 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 21:26:15.545947 1199225 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 21:26:15.545951 1199225 kubeadm.go:322] 
	I0717 21:26:15.546000 1199225 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 21:26:15.546005 1199225 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0717 21:26:15.546077 1199225 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 21:26:15.546081 1199225 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 21:26:15.546146 1199225 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 21:26:15.546150 1199225 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 21:26:15.546154 1199225 kubeadm.go:322] 
	I0717 21:26:15.546233 1199225 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 21:26:15.546238 1199225 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0717 21:26:15.546316 1199225 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 21:26:15.546335 1199225 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0717 21:26:15.546338 1199225 kubeadm.go:322] 
	I0717 21:26:15.546418 1199225 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token p0ew34.nt0ks9b55499h13p \
	I0717 21:26:15.546422 1199225 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token p0ew34.nt0ks9b55499h13p \
	I0717 21:26:15.546519 1199225 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:114c2c6cf073ae167542850daf65adc7c2fffca2d9da9ec1b9de2454bc4224aa \
	I0717 21:26:15.546524 1199225 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:114c2c6cf073ae167542850daf65adc7c2fffca2d9da9ec1b9de2454bc4224aa \
	I0717 21:26:15.546543 1199225 kubeadm.go:322] 	--control-plane 
	I0717 21:26:15.546547 1199225 command_runner.go:130] > 	--control-plane 
	I0717 21:26:15.546550 1199225 kubeadm.go:322] 
	I0717 21:26:15.546630 1199225 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 21:26:15.546634 1199225 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0717 21:26:15.546644 1199225 kubeadm.go:322] 
	I0717 21:26:15.549189 1199225 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token p0ew34.nt0ks9b55499h13p \
	I0717 21:26:15.549208 1199225 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token p0ew34.nt0ks9b55499h13p \
	I0717 21:26:15.549305 1199225 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:114c2c6cf073ae167542850daf65adc7c2fffca2d9da9ec1b9de2454bc4224aa 
	I0717 21:26:15.549316 1199225 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:114c2c6cf073ae167542850daf65adc7c2fffca2d9da9ec1b9de2454bc4224aa 
	I0717 21:26:15.551003 1199225 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-aws\n", err: exit status 1
	I0717 21:26:15.551060 1199225 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-aws\n", err: exit status 1
	I0717 21:26:15.551279 1199225 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 21:26:15.551306 1199225 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
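
Note on the join command echoed above: the --discovery-token-ca-cert-hash value lets a joining node authenticate the control plane before trusting it; it is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch of that computation (illustrative only, not minikube code; it assumes the conventional kubeadm CA path):

// Compute the kubeadm discovery-token-ca-cert-hash for a CA certificate.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Conventional kubeadm location; adjust for your cluster.
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM certificate found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
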
	I0717 21:26:15.551343 1199225 cni.go:84] Creating CNI manager for ""
	I0717 21:26:15.551358 1199225 cni.go:137] 1 nodes found, recommending kindnet
	I0717 21:26:15.553308 1199225 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 21:26:15.554963 1199225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 21:26:15.562708 1199225 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 21:26:15.562733 1199225 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0717 21:26:15.562741 1199225 command_runner.go:130] > Device: 3ah/58d	Inode: 5193619     Links: 1
	I0717 21:26:15.562749 1199225 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 21:26:15.562756 1199225 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0717 21:26:15.562762 1199225 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0717 21:26:15.562774 1199225 command_runner.go:130] > Change: 2023-07-17 21:03:29.560782622 +0000
	I0717 21:26:15.562784 1199225 command_runner.go:130] >  Birth: 2023-07-17 21:03:29.520782656 +0000
	I0717 21:26:15.565289 1199225 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 21:26:15.565313 1199225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 21:26:15.635645 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 21:26:16.547823 1199225 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0717 21:26:16.547844 1199225 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0717 21:26:16.547850 1199225 command_runner.go:130] > serviceaccount/kindnet created
	I0717 21:26:16.547856 1199225 command_runner.go:130] > daemonset.apps/kindnet created
	I0717 21:26:16.547879 1199225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 21:26:16.548017 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:16.548102 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=multinode-810165 minikube.k8s.io/updated_at=2023_07_17T21_26_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:16.562977 1199225 command_runner.go:130] > -16
	I0717 21:26:16.563017 1199225 ops.go:34] apiserver oom_adj: -16
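
The -16 read back above means the kubelet has lowered kube-apiserver's OOM score so the kernel OOM killer prefers other victims. A small Go sketch of the same check done by the bash one-liner (a hypothetical helper, taking the pid as an argument instead of pgrep):

// Read a process's oom_adj from /proc, as the bash one-liner above does.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: oomadj <pid>")
		os.Exit(1)
	}
	data, err := os.ReadFile("/proc/" + os.Args[1] + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println("oom_adj:", strings.TrimSpace(string(data)))
}
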
	I0717 21:26:16.671177 1199225 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0717 21:26:16.675133 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:16.735550 1199225 command_runner.go:130] > node/multinode-810165 labeled
	I0717 21:26:16.819749 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:17.320364 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:17.411459 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:17.820984 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:17.907908 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:18.320450 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:18.411630 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:18.820854 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:18.912458 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:19.320776 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:19.408384 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:19.820760 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:19.920555 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:20.320114 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:20.415822 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:20.820417 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:20.916650 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:21.319981 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:21.411104 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:21.820811 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:21.910391 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:22.320035 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:22.415250 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:22.820897 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:22.917012 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:23.320183 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:23.410628 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:23.820263 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:23.909953 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:24.320013 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:24.416573 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:24.820001 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:24.915359 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:25.319992 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:25.417890 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:25.820134 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:25.943293 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:26.320027 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:26.412009 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:26.820836 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:26.919823 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:27.320132 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:27.434809 1199225 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 21:26:27.820672 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:26:27.923584 1199225 command_runner.go:130] > NAME      SECRETS   AGE
	I0717 21:26:27.923601 1199225 command_runner.go:130] > default   0         0s
	I0717 21:26:27.927255 1199225 kubeadm.go:1081] duration metric: took 11.379293988s to wait for elevateKubeSystemPrivileges.
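
The NotFound loop above is minikube waiting for the control plane to create the "default" ServiceAccount (retried roughly every 500ms); only once it exists can kube-system privileges be elevated. An equivalent wait written against client-go, as a sketch using the kubeconfig path shown in this log rather than the actual implementation:

// Poll until the "default" ServiceAccount exists, mirroring the retry loop above.
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(2 * time.Minute)
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			break
		}
		if !apierrors.IsNotFound(err) || time.Now().After(deadline) {
			panic(err) // a non-NotFound error, or we ran out of time
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the log above
	}
	fmt.Println("default ServiceAccount exists; privileges can be elevated")
}
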
	I0717 21:26:27.927280 1199225 kubeadm.go:406] StartCluster complete in 29.070037935s
	I0717 21:26:27.927296 1199225 settings.go:142] acquiring lock: {Name:mkf49a04ad0833d4cf5e309fbf4dcc2866032ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:26:27.927356 1199225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:26:27.928129 1199225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-1130480/kubeconfig: {Name:mkeb40f750a7362e9193faee51ea6ae2e33e893d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:26:27.928627 1199225 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:26:27.928899 1199225 kapi.go:59] client config for multinode-810165: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 21:26:27.929363 1199225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 21:26:27.929800 1199225 config.go:182] Loaded profile config "multinode-810165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:26:27.930036 1199225 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 21:26:27.930102 1199225 addons.go:69] Setting storage-provisioner=true in profile "multinode-810165"
	I0717 21:26:27.930117 1199225 addons.go:231] Setting addon storage-provisioner=true in "multinode-810165"
	I0717 21:26:27.930175 1199225 host.go:66] Checking if "multinode-810165" exists ...
	I0717 21:26:27.930816 1199225 cli_runner.go:164] Run: docker container inspect multinode-810165 --format={{.State.Status}}
	I0717 21:26:27.931291 1199225 addons.go:69] Setting default-storageclass=true in profile "multinode-810165"
	I0717 21:26:27.931321 1199225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-810165"
	I0717 21:26:27.931595 1199225 cli_runner.go:164] Run: docker container inspect multinode-810165 --format={{.State.Status}}
	I0717 21:26:27.932095 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 21:26:27.932136 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:27.932159 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:27.932182 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:27.932430 1199225 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 21:26:27.974861 1199225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:26:27.979316 1199225 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:26:27.979336 1199225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 21:26:27.979400 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165
	I0717 21:26:28.015002 1199225 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:26:28.015272 1199225 kapi.go:59] client config for multinode-810165: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 21:26:28.016371 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0717 21:26:28.016395 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:28.016406 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:28.016413 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:28.029305 1199225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34101 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165/id_rsa Username:docker}
	I0717 21:26:28.085254 1199225 round_trippers.go:574] Response Status: 200 OK in 68 milliseconds
	I0717 21:26:28.085282 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:28.085292 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:28 GMT
	I0717 21:26:28.085298 1199225 round_trippers.go:580]     Audit-Id: 25ed73a5-aa5d-4b8e-985c-b02fc934aafd
	I0717 21:26:28.085305 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:28.085312 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:28.085319 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:28.085328 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:28.085339 1199225 round_trippers.go:580]     Content-Length: 109
	I0717 21:26:28.087061 1199225 round_trippers.go:574] Response Status: 200 OK in 154 milliseconds
	I0717 21:26:28.087085 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:28.087096 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:28.087103 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:28.087113 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:28.087121 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:28.087130 1199225 round_trippers.go:580]     Content-Length: 291
	I0717 21:26:28.087144 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:28 GMT
	I0717 21:26:28.087151 1199225 round_trippers.go:580]     Audit-Id: 58259d3f-e35e-4899-8310-f1a96dd2569f
	I0717 21:26:28.087554 1199225 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"357"},"items":[]}
	I0717 21:26:28.087923 1199225 addons.go:231] Setting addon default-storageclass=true in "multinode-810165"
	I0717 21:26:28.087964 1199225 host.go:66] Checking if "multinode-810165" exists ...
	I0717 21:26:28.088403 1199225 cli_runner.go:164] Run: docker container inspect multinode-810165 --format={{.State.Status}}
	I0717 21:26:28.096187 1199225 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"079abaa7-e8db-4785-a68a-7cea17b9f8f9","resourceVersion":"353","creationTimestamp":"2023-07-17T21:26:15Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 21:26:28.096584 1199225 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"079abaa7-e8db-4785-a68a-7cea17b9f8f9","resourceVersion":"353","creationTimestamp":"2023-07-17T21:26:15Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 21:26:28.096639 1199225 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 21:26:28.096652 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:28.096662 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:28.096673 1199225 round_trippers.go:473]     Content-Type: application/json
	I0717 21:26:28.096681 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:28.123000 1199225 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 21:26:28.123020 1199225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 21:26:28.123084 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165
	I0717 21:26:28.143511 1199225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34101 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165/id_rsa Username:docker}
	I0717 21:26:28.164739 1199225 round_trippers.go:574] Response Status: 200 OK in 68 milliseconds
	I0717 21:26:28.164761 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:28.164770 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:28.164778 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:28.164785 1199225 round_trippers.go:580]     Content-Length: 291
	I0717 21:26:28.164791 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:28 GMT
	I0717 21:26:28.164798 1199225 round_trippers.go:580]     Audit-Id: a9035707-d7c5-4080-a85e-0948f578d225
	I0717 21:26:28.164804 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:28.164811 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:28.171717 1199225 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"079abaa7-e8db-4785-a68a-7cea17b9f8f9","resourceVersion":"363","creationTimestamp":"2023-07-17T21:26:15Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 21:26:28.185305 1199225 command_runner.go:130] > apiVersion: v1
	I0717 21:26:28.185369 1199225 command_runner.go:130] > data:
	I0717 21:26:28.185388 1199225 command_runner.go:130] >   Corefile: |
	I0717 21:26:28.185407 1199225 command_runner.go:130] >     .:53 {
	I0717 21:26:28.185444 1199225 command_runner.go:130] >         errors
	I0717 21:26:28.185466 1199225 command_runner.go:130] >         health {
	I0717 21:26:28.185486 1199225 command_runner.go:130] >            lameduck 5s
	I0717 21:26:28.185505 1199225 command_runner.go:130] >         }
	I0717 21:26:28.185525 1199225 command_runner.go:130] >         ready
	I0717 21:26:28.185558 1199225 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0717 21:26:28.185585 1199225 command_runner.go:130] >            pods insecure
	I0717 21:26:28.185607 1199225 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0717 21:26:28.185628 1199225 command_runner.go:130] >            ttl 30
	I0717 21:26:28.185647 1199225 command_runner.go:130] >         }
	I0717 21:26:28.185678 1199225 command_runner.go:130] >         prometheus :9153
	I0717 21:26:28.185697 1199225 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0717 21:26:28.185717 1199225 command_runner.go:130] >            max_concurrent 1000
	I0717 21:26:28.185735 1199225 command_runner.go:130] >         }
	I0717 21:26:28.185763 1199225 command_runner.go:130] >         cache 30
	I0717 21:26:28.185788 1199225 command_runner.go:130] >         loop
	I0717 21:26:28.185807 1199225 command_runner.go:130] >         reload
	I0717 21:26:28.185826 1199225 command_runner.go:130] >         loadbalance
	I0717 21:26:28.185844 1199225 command_runner.go:130] >     }
	I0717 21:26:28.185875 1199225 command_runner.go:130] > kind: ConfigMap
	I0717 21:26:28.185901 1199225 command_runner.go:130] > metadata:
	I0717 21:26:28.185926 1199225 command_runner.go:130] >   creationTimestamp: "2023-07-17T21:26:15Z"
	I0717 21:26:28.185946 1199225 command_runner.go:130] >   name: coredns
	I0717 21:26:28.185976 1199225 command_runner.go:130] >   namespace: kube-system
	I0717 21:26:28.185996 1199225 command_runner.go:130] >   resourceVersion: "268"
	I0717 21:26:28.186016 1199225 command_runner.go:130] >   uid: 3be92e9c-94b2-48cd-becb-21e27a88abd3
	I0717 21:26:28.189565 1199225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 21:26:28.209034 1199225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:26:28.358969 1199225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 21:26:28.672411 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 21:26:28.672433 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:28.672442 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:28.672449 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:28.714472 1199225 round_trippers.go:574] Response Status: 200 OK in 42 milliseconds
	I0717 21:26:28.714495 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:28.714503 1199225 round_trippers.go:580]     Content-Length: 291
	I0717 21:26:28.714510 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:28 GMT
	I0717 21:26:28.714517 1199225 round_trippers.go:580]     Audit-Id: 178f47b5-16a5-458f-baa2-7e8f80d8f899
	I0717 21:26:28.714526 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:28.714532 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:28.714540 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:28.714546 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:28.714570 1199225 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"079abaa7-e8db-4785-a68a-7cea17b9f8f9","resourceVersion":"378","creationTimestamp":"2023-07-17T21:26:15Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 21:26:28.714658 1199225 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-810165" context rescaled to 1 replicas
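
The GET/PUT pair on the coredns Scale subresource above is how the deployment gets pinned to a single replica on this one-node cluster. The same round trip via client-go's typed Scale client, as a sketch assuming the kubeconfig path used throughout this log:

// Rescale the kube-system/coredns deployment to 1 replica via the Scale subresource.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deps := cs.AppsV1().Deployments("kube-system")
	scale, err := deps.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1 // one replica is enough on a single node
	if _, err := deps.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
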
	I0717 21:26:28.714681 1199225 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 21:26:28.716705 1199225 out.go:177] * Verifying Kubernetes components...
	I0717 21:26:28.718791 1199225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:26:28.890021 1199225 command_runner.go:130] > configmap/coredns replaced
	I0717 21:26:28.895272 1199225 start.go:917] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
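
The sed pipeline at 21:26:28.189565 rewrote the Corefile dumped above, inserting a hosts block ahead of the forward plugin so in-cluster DNS can resolve host.minikube.internal to the gateway (192.168.58.1 here). A rough Go equivalent of that string edit, illustrative only since minikube performs it with sed on the node:

// Insert a CoreDNS hosts{} stanza before the forward plugin in a Corefile.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Place the hosts block just before the forward block, as the sed script does.
		if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
			out.WriteString(hosts)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.58.1"))
}
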
	I0717 21:26:29.093483 1199225 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0717 21:26:29.100106 1199225 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0717 21:26:29.110817 1199225 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0717 21:26:29.119637 1199225 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0717 21:26:29.129760 1199225 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0717 21:26:29.143172 1199225 command_runner.go:130] > pod/storage-provisioner created
	I0717 21:26:29.149502 1199225 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0717 21:26:29.150228 1199225 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:26:29.151489 1199225 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 21:26:29.154014 1199225 addons.go:502] enable addons completed in 1.223972591s: enabled=[storage-provisioner default-storageclass]
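
Each addon above is materialized by copying its manifest to /etc/kubernetes/addons/ on the node and applying it with the pinned kubectl under the node-local kubeconfig. Roughly, as a sketch of the shell-out rather than minikube's ssh_runner:

// Apply an addon manifest with the pinned kubectl, as the log lines above do over SSH.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.27.3/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
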
	I0717 21:26:29.154389 1199225 kapi.go:59] client config for multinode-810165: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 21:26:29.154726 1199225 node_ready.go:35] waiting up to 6m0s for node "multinode-810165" to be "Ready" ...
	I0717 21:26:29.154866 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:29.154891 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:29.154912 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:29.154933 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:29.157552 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:29.157610 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:29.157632 1199225 round_trippers.go:580]     Audit-Id: d705c582-b656-4224-94a3-ae967b5d00e7
	I0717 21:26:29.157656 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:29.157681 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:29.157691 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:29.157717 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:29.157724 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:29 GMT
	I0717 21:26:29.157945 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:29.659532 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:29.659600 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:29.659624 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:29.659646 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:29.662060 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:29.662087 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:29.662096 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:29.662104 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:29.662110 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:29.662117 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:29.662127 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:29 GMT
	I0717 21:26:29.662140 1199225 round_trippers.go:580]     Audit-Id: 07cb7f58-1978-452c-b39e-bf2fa1fa9c50
	I0717 21:26:29.662665 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:30.158775 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:30.158856 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:30.158886 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:30.158956 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:30.162950 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:26:30.163035 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:30.163059 1199225 round_trippers.go:580]     Audit-Id: 0dd516f6-aec9-4e02-ba29-069fd30be349
	I0717 21:26:30.163083 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:30.163117 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:30.163144 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:30.163168 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:30.163205 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:30 GMT
	I0717 21:26:30.163805 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:30.659026 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:30.659048 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:30.659060 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:30.659067 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:30.665956 1199225 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 21:26:30.666027 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:30.666041 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:30.666048 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:30.666059 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:30.666067 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:30 GMT
	I0717 21:26:30.666077 1199225 round_trippers.go:580]     Audit-Id: ec85a880-8049-4b63-9f39-b01716a27dca
	I0717 21:26:30.666086 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:30.666313 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:31.159429 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:31.159453 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:31.159463 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:31.159471 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:31.162110 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:31.162135 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:31.162144 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:31.162151 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:31.162157 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:31.162164 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:31 GMT
	I0717 21:26:31.162171 1199225 round_trippers.go:580]     Audit-Id: c4c24018-85c9-4f60-8fc5-642dbc4e31ff
	I0717 21:26:31.162178 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:31.162360 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:31.162761 1199225 node_ready.go:58] node "multinode-810165" has status "Ready":"False"
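
The polling GETs above and below repeat until the node reports Ready, which only happens once kindnet has brought up the pod network. The condition being tested, expressed with client-go as an illustrative sketch using the names from this log:

// Check whether a node's Ready condition is True, as node_ready.go does above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeIsReady(cs, "multinode-810165")
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", ready) // stays false until the CNI is configured
}
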
	I0717 21:26:31.658979 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:31.659004 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:31.659016 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:31.659024 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:31.661821 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:31.661886 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:31.661908 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:31 GMT
	I0717 21:26:31.661933 1199225 round_trippers.go:580]     Audit-Id: 252af7ec-b0e3-4c47-bad1-90df988d62df
	I0717 21:26:31.661973 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:31.662001 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:31.662023 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:31.662046 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:31.662279 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:32.158856 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:32.158878 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:32.158888 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:32.158896 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:32.161548 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:32.161569 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:32.161579 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:32 GMT
	I0717 21:26:32.161586 1199225 round_trippers.go:580]     Audit-Id: a6038054-372c-4e69-9bf3-1f74e84cde9c
	I0717 21:26:32.161593 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:32.161599 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:32.161606 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:32.161612 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:32.161712 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:32.658937 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:32.658959 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:32.658969 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:32.658977 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:32.661699 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:32.661728 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:32.661737 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:32 GMT
	I0717 21:26:32.661744 1199225 round_trippers.go:580]     Audit-Id: b2f3adb6-5066-49b8-bd26-0b80f2645152
	I0717 21:26:32.661752 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:32.661759 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:32.661766 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:32.661772 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:32.661936 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:33.159349 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:33.159377 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:33.159387 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:33.159396 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:33.162199 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:33.162225 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:33.162234 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:33.162242 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:33.162248 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:33.162255 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:33 GMT
	I0717 21:26:33.162262 1199225 round_trippers.go:580]     Audit-Id: 73df0c18-aeea-45fe-a416-a9db9680e081
	I0717 21:26:33.162270 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:33.162407 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:33.162825 1199225 node_ready.go:58] node "multinode-810165" has status "Ready":"False"
	I0717 21:26:33.659589 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:33.659614 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:33.659623 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:33.659631 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:33.662892 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:26:33.662917 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:33.662926 1199225 round_trippers.go:580]     Audit-Id: 516d707c-358e-4756-971f-9ca46bb1feae
	I0717 21:26:33.662933 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:33.662940 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:33.662947 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:33.662954 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:33.662969 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:33 GMT
	I0717 21:26:33.663348 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:34.158876 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:34.158901 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:34.158911 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:34.158919 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:34.161586 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:34.161613 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:34.161623 1199225 round_trippers.go:580]     Audit-Id: b1a94c4b-f27e-4a15-8804-0483449746ab
	I0717 21:26:34.161630 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:34.161637 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:34.161644 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:34.161651 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:34.161662 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:34 GMT
	I0717 21:26:34.161923 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:34.658888 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:34.658916 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:34.658928 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:34.658936 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:34.661689 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:34.661712 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:34.661720 1199225 round_trippers.go:580]     Audit-Id: b1af998a-5732-4913-a518-033dbe535681
	I0717 21:26:34.661727 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:34.661734 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:34.661740 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:34.661747 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:34.661754 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:34 GMT
	I0717 21:26:34.661895 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:35.159549 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:35.159573 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:35.159583 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:35.159591 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:35.162357 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:35.162386 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:35.162396 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:35.162403 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:35 GMT
	I0717 21:26:35.162410 1199225 round_trippers.go:580]     Audit-Id: bda742fe-e20a-4ef0-bfba-1b4134010ae1
	I0717 21:26:35.162417 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:35.162424 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:35.162430 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:35.162533 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:35.162993 1199225 node_ready.go:58] node "multinode-810165" has status "Ready":"False"
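
Each polling cycle recorded above is a plain GET of the Node object followed by a check of its Ready condition; node_ready.go keeps logging "Ready":"False" until the kubelet flips that condition. A rough command-line equivalent of one cycle (assuming the profile's kubeconfig context is named multinode-810165 and that jq is installed; both are assumptions, not something this log confirms):

	kubectl --context multinode-810165 get node multinode-810165 -o json \
	  | jq -r '.status.conditions[] | select(.type == "Ready") | .status'

While the node is still initializing this prints False, matching the status lines in this log.
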
	I0717 21:26:35.658889 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:35.658914 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:35.658924 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:35.658932 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:35.665189 1199225 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 21:26:35.665215 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:35.665228 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:35 GMT
	I0717 21:26:35.665235 1199225 round_trippers.go:580]     Audit-Id: 2b9e5b7f-a17b-4c6a-8b66-a93c5e2b54fa
	I0717 21:26:35.665242 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:35.665249 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:35.665256 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:35.665263 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:35.665613 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:36.158851 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:36.158875 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:36.158885 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:36.158892 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:36.161614 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:36.161640 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:36.161649 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:36.161656 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:36.161663 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:36.161670 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:36 GMT
	I0717 21:26:36.161678 1199225 round_trippers.go:580]     Audit-Id: 24403a39-556f-4b88-a23e-56a684d5cbdd
	I0717 21:26:36.161685 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:36.161775 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:36.659625 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:36.659649 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:36.659660 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:36.659668 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:36.662254 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:36.662279 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:36.662289 1199225 round_trippers.go:580]     Audit-Id: 06471a1b-db2c-4e6a-b4a2-87047d965df7
	I0717 21:26:36.662296 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:36.662302 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:36.662309 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:36.662315 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:36.662327 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:36 GMT
	I0717 21:26:36.662476 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:37.159641 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:37.159664 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:37.159675 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:37.159683 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:37.162253 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:37.162279 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:37.162288 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:37.162295 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:37.162302 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:37.162310 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:37.162320 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:37 GMT
	I0717 21:26:37.162330 1199225 round_trippers.go:580]     Audit-Id: 8c0c05c8-3baf-468a-8c12-abc96f0150cb
	I0717 21:26:37.162449 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:37.659631 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:37.659654 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:37.659664 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:37.659672 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:37.662392 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:37.662419 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:37.662429 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:37.662436 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:37.662443 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:37.662450 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:37.662457 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:37 GMT
	I0717 21:26:37.662467 1199225 round_trippers.go:580]     Audit-Id: da438d06-aea7-4001-9813-64341195e93e
	I0717 21:26:37.662632 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:37.663028 1199225 node_ready.go:58] node "multinode-810165" has status "Ready":"False"
	I0717 21:26:38.159307 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:38.159331 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:38.159341 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:38.159348 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:38.161890 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:38.161911 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:38.161921 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:38.161928 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:38.161934 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:38.161941 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:38.161948 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:38 GMT
	I0717 21:26:38.161955 1199225 round_trippers.go:580]     Audit-Id: c94e3dfe-bb80-4257-9446-6528fe1dc679
	I0717 21:26:38.162034 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:38.659116 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:38.659141 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:38.659152 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:38.659159 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:38.661697 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:38.661725 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:38.661734 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:38.661741 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:38.661748 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:38.661755 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:38.661761 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:38 GMT
	I0717 21:26:38.661770 1199225 round_trippers.go:580]     Audit-Id: 83fe6e4b-fd6e-4cd5-a3f2-fc1ed2187cfb
	I0717 21:26:38.661883 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:39.158828 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:39.158855 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:39.158865 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:39.158873 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:39.161710 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:39.161739 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:39.161749 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:39 GMT
	I0717 21:26:39.161756 1199225 round_trippers.go:580]     Audit-Id: a8c21774-f2d0-4540-8e1f-6ab678401822
	I0717 21:26:39.161763 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:39.161769 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:39.161776 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:39.161783 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:39.161880 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:39.659606 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:39.659631 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:39.659642 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:39.659650 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:39.662177 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:39.662205 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:39.662214 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:39.662222 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:39 GMT
	I0717 21:26:39.662229 1199225 round_trippers.go:580]     Audit-Id: 7df70613-afc4-4bff-92af-1e62b8cf1f08
	I0717 21:26:39.662235 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:39.662242 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:39.662248 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:39.662362 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:40.159481 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:40.159506 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:40.159517 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:40.159525 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:40.162151 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:40.162174 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:40.162183 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:40.162189 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:40.162196 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:40.162203 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:40.162210 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:40 GMT
	I0717 21:26:40.162217 1199225 round_trippers.go:580]     Audit-Id: 07747368-ad18-4d6e-91a2-d136d6520293
	I0717 21:26:40.162317 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:40.162729 1199225 node_ready.go:58] node "multinode-810165" has status "Ready":"False"
	I0717 21:26:40.659534 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:40.659558 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:40.659568 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:40.659577 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:40.662295 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:40.662322 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:40.662331 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:40.662340 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:40 GMT
	I0717 21:26:40.662347 1199225 round_trippers.go:580]     Audit-Id: 0722bb14-ed78-4c95-815b-f7ed8357d526
	I0717 21:26:40.662353 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:40.662360 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:40.662371 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:40.662512 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:41.159744 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:41.159771 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:41.159781 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:41.159789 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:41.162820 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:26:41.162846 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:41.162856 1199225 round_trippers.go:580]     Audit-Id: 0d7a2168-4266-4479-ab7b-4d16d7e6cf3f
	I0717 21:26:41.162863 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:41.162870 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:41.162876 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:41.162883 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:41.162894 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:41 GMT
	I0717 21:26:41.163023 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:41.659158 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:41.659182 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:41.659192 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:41.659200 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:41.661790 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:41.661814 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:41.661823 1199225 round_trippers.go:580]     Audit-Id: 745f3123-1a55-499d-8e7a-c32627eb9a5e
	I0717 21:26:41.661830 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:41.661837 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:41.661844 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:41.661852 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:41.661859 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:41 GMT
	I0717 21:26:41.662126 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:42.159059 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:42.159084 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:42.159095 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:42.159103 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:42.162213 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:26:42.162238 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:42.162248 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:42.162256 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:42 GMT
	I0717 21:26:42.162264 1199225 round_trippers.go:580]     Audit-Id: 68f78b7b-653e-43e0-843f-7f295156b5ba
	I0717 21:26:42.162271 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:42.162278 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:42.162285 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:42.162425 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:42.162878 1199225 node_ready.go:58] node "multinode-810165" has status "Ready":"False"
	I0717 21:26:42.659679 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:42.659702 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:42.659712 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:42.659719 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:42.662130 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:42.662159 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:42.662168 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:42 GMT
	I0717 21:26:42.662176 1199225 round_trippers.go:580]     Audit-Id: c6193b47-ecf7-4103-b5b9-aba8abc86e2e
	I0717 21:26:42.662182 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:42.662190 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:42.662199 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:42.662211 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:42.662497 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:43.159472 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:43.159494 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:43.159506 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:43.159514 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:43.161966 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:43.161990 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:43.161999 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:43.162005 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:43.162012 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:43.162019 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:43.162028 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:43 GMT
	I0717 21:26:43.162034 1199225 round_trippers.go:580]     Audit-Id: aa9e69c9-8fd1-4de0-a67e-dcb27fe5c5c1
	I0717 21:26:43.162132 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:43.659787 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:43.659807 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:43.659817 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:43.659824 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:43.662559 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:43.662586 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:43.662596 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:43.662603 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:43.662609 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:43.662616 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:43.662623 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:43 GMT
	I0717 21:26:43.662633 1199225 round_trippers.go:580]     Audit-Id: 1d6a987d-4ee8-4fd2-ac1d-3987bcbb2266
	I0717 21:26:43.662758 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:44.158841 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:44.158865 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:44.158875 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:44.158883 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:44.161479 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:44.161508 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:44.161517 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:44.161525 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:44.161532 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:44 GMT
	I0717 21:26:44.161538 1199225 round_trippers.go:580]     Audit-Id: c741811f-7946-4c31-9e05-d9c839ae7b29
	I0717 21:26:44.161545 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:44.161552 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:44.161640 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:44.658749 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:44.658776 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:44.658787 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:44.658795 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:44.661354 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:44.661382 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:44.661391 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:44.661398 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:44.661405 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:44 GMT
	I0717 21:26:44.661412 1199225 round_trippers.go:580]     Audit-Id: 4a67529e-7517-44b8-bc99-461aa776f23a
	I0717 21:26:44.661419 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:44.661425 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:44.661575 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:44.662013 1199225 node_ready.go:58] node "multinode-810165" has status "Ready":"False"
	I0717 21:26:45.158898 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:45.158927 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:45.158938 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:45.158946 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:45.162594 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:26:45.162620 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:45.162630 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:45.162639 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:45.162646 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:45 GMT
	I0717 21:26:45.162654 1199225 round_trippers.go:580]     Audit-Id: 2e555da0-8df0-4050-9d1a-105cd173af52
	I0717 21:26:45.162661 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:45.162668 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:45.162912 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:45.659332 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:45.659359 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:45.659369 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:45.659377 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:45.661860 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:45.661889 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:45.661898 1199225 round_trippers.go:580]     Audit-Id: ff2b8be9-e349-4ca6-8374-88a2024d6e27
	I0717 21:26:45.661905 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:45.661912 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:45.661919 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:45.661925 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:45.661937 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:45 GMT
	I0717 21:26:45.662076 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:46.159154 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:46.159184 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:46.159198 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:46.159211 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:46.162127 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:46.162151 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:46.162162 1199225 round_trippers.go:580]     Audit-Id: 2ba00483-94c0-4eaf-bb1f-f11d35fd7cbc
	I0717 21:26:46.162169 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:46.162176 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:46.162183 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:46.162190 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:46.162199 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:46 GMT
	I0717 21:26:46.162333 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:46.659565 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:46.659592 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:46.659603 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:46.659611 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:46.662186 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:46.662221 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:46.662232 1199225 round_trippers.go:580]     Audit-Id: 9db83423-f15a-4b72-a721-baa1d12914c9
	I0717 21:26:46.662239 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:46.662246 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:46.662253 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:46.662266 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:46.662275 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:46 GMT
	I0717 21:26:46.662392 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:46.662796 1199225 node_ready.go:58] node "multinode-810165" has status "Ready":"False"
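	The polling loop above issues GET /api/v1/nodes/multinode-810165 roughly every 500 ms and inspects the node's Ready condition, logging "Ready":"False" until the kubelet reports ready. A minimal client-go sketch of that readiness check follows for orientation; the kubeconfig path, timeout, and helper name waitNodeReady are illustrative assumptions, not minikube's actual node_ready.go implementation.

	// Minimal client-go sketch of the readiness poll seen in the log above.
	// Assumptions: kubeconfig at the default path, a 500 ms poll interval, and
	// the helper name waitNodeReady -- none of this is minikube's actual code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady blocks until the named node reports Ready=True or ctx expires.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // matches the ~500 ms cadence in the log
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return err
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
				fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "multinode-810165"); err != nil {
			panic(err)
		}
	}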
	I0717 21:26:47.159559 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:47.159582 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:47.159593 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:47.159600 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:47.161987 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:47.162008 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:47.162017 1199225 round_trippers.go:580]     Audit-Id: 32f3a866-4954-4485-8fb2-5cc49d3f49ea
	I0717 21:26:47.162025 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:47.162031 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:47.162038 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:47.162045 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:47.162056 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:47 GMT
	I0717 21:26:47.162348 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:47.659445 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:47.659468 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:47.659478 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:47.659486 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:47.661999 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:47.662075 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:47.662098 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:47.662110 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:47.662117 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:47.662141 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:47.662151 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:47 GMT
	I0717 21:26:47.662165 1199225 round_trippers.go:580]     Audit-Id: 7a3da927-adfa-42e6-9716-a907ca355ee8
	I0717 21:26:47.662357 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:48.159219 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:48.159243 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:48.159253 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:48.159262 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:48.161833 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:48.161862 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:48.161880 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:48 GMT
	I0717 21:26:48.161888 1199225 round_trippers.go:580]     Audit-Id: 8e25fe04-78c1-4706-990e-9d4bd9fc64b3
	I0717 21:26:48.161895 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:48.161902 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:48.161912 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:48.161922 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:48.162274 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:48.658847 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:48.658870 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:48.658881 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:48.658888 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:48.661527 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:48.661551 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:48.661559 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:48.661566 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:48.661573 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:48.661581 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:48.661588 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:48 GMT
	I0717 21:26:48.661594 1199225 round_trippers.go:580]     Audit-Id: b56c6324-54d4-4ba8-acf8-641e3fdff29c
	I0717 21:26:48.661714 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:49.159256 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:49.159281 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:49.159291 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:49.159299 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:49.161990 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:49.162017 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:49.162027 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:49.162034 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:49.162041 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:49.162047 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:49.162055 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:49 GMT
	I0717 21:26:49.162064 1199225 round_trippers.go:580]     Audit-Id: 49672674-6838-4c04-9830-0e3b8d7e7e40
	I0717 21:26:49.162163 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:49.162560 1199225 node_ready.go:58] node "multinode-810165" has status "Ready":"False"
	I0717 21:26:49.658819 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:49.658844 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:49.658854 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:49.658862 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:49.661515 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:49.661574 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:49.661595 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:49.661619 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:49.661646 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:49.661656 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:49.661663 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:49 GMT
	I0717 21:26:49.661669 1199225 round_trippers.go:580]     Audit-Id: 9e90bb1b-4c17-48af-809b-c3e09ca43fd4
	I0717 21:26:49.661794 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:50.158846 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:50.158871 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:50.158881 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:50.158889 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:50.161669 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:50.161702 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:50.161712 1199225 round_trippers.go:580]     Audit-Id: 6b9b9c3f-e560-4c3c-b981-d96cb90a83a1
	I0717 21:26:50.161719 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:50.161727 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:50.161734 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:50.161741 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:50.161750 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:50 GMT
	I0717 21:26:50.162056 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:50.659708 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:50.659730 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:50.659740 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:50.659747 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:50.662323 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:50.662346 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:50.662355 1199225 round_trippers.go:580]     Audit-Id: afba0a45-349a-4ec8-a7a5-46a255e64489
	I0717 21:26:50.662362 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:50.662369 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:50.662376 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:50.662385 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:50.662392 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:50 GMT
	I0717 21:26:50.662526 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:51.159713 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:51.159737 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:51.159754 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:51.159762 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:51.162376 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:51.162403 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:51.162413 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:51.162420 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:51.162427 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:51.162434 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:51 GMT
	I0717 21:26:51.162441 1199225 round_trippers.go:580]     Audit-Id: 6185e1b8-3153-439d-abc9-7390a1e295bb
	I0717 21:26:51.162451 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:51.162793 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:51.163226 1199225 node_ready.go:58] node "multinode-810165" has status "Ready":"False"
	I0717 21:26:51.659489 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:51.659513 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:51.659523 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:51.659531 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:51.662066 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:51.662091 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:51.662100 1199225 round_trippers.go:580]     Audit-Id: 7424a1e1-37f9-480c-8a25-32408f4957f8
	I0717 21:26:51.662107 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:51.662115 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:51.662122 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:51.662129 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:51.662136 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:51 GMT
	I0717 21:26:51.662288 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:52.159510 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:52.159534 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:52.159544 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:52.159552 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:52.162063 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:52.162089 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:52.162101 1199225 round_trippers.go:580]     Audit-Id: c8dcfd68-5494-46b4-b5b4-d748b5c44a4b
	I0717 21:26:52.162108 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:52.162115 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:52.162122 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:52.162129 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:52.162136 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:52 GMT
	I0717 21:26:52.162474 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:52.658860 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:52.658884 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:52.658895 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:52.658903 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:52.661484 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:52.661506 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:52.661515 1199225 round_trippers.go:580]     Audit-Id: 92a3d404-fc32-4f03-9ebd-7d9a1b4e739a
	I0717 21:26:52.661522 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:52.661529 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:52.661535 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:52.661542 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:52.661549 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:52 GMT
	I0717 21:26:52.661701 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:53.159510 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:53.159536 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:53.159546 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:53.159554 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:53.162159 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:53.162198 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:53.162209 1199225 round_trippers.go:580]     Audit-Id: 4da13e85-9433-4571-85ff-0824840d7bca
	I0717 21:26:53.162216 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:53.162223 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:53.162230 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:53.162239 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:53.162254 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:53 GMT
	I0717 21:26:53.162354 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:53.658796 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:53.658823 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:53.658833 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:53.658841 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:53.661605 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:53.661632 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:53.661641 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:53.661648 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:53.661655 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:53.661662 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:53.661669 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:53 GMT
	I0717 21:26:53.661676 1199225 round_trippers.go:580]     Audit-Id: 3dc32c98-741e-426c-802c-fd1faada31c1
	I0717 21:26:53.661809 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:53.662207 1199225 node_ready.go:58] node "multinode-810165" has status "Ready":"False"
	I0717 21:26:54.158845 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:54.158879 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:54.158891 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:54.158898 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:54.161499 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:54.161524 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:54.161532 1199225 round_trippers.go:580]     Audit-Id: e812089c-299e-4025-b624-8a5175a1feaf
	I0717 21:26:54.161539 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:54.161546 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:54.161553 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:54.161559 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:54.161566 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:54 GMT
	I0717 21:26:54.161683 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:54.658753 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:54.658777 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:54.658788 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:54.658795 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:54.661400 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:54.661432 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:54.661442 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:54.661449 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:54.661456 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:54.661463 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:54.661473 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:54 GMT
	I0717 21:26:54.661482 1199225 round_trippers.go:580]     Audit-Id: ab4c071a-ee25-4d8d-b966-2de7bdf4478b
	I0717 21:26:54.661712 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:55.159442 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:55.159468 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:55.159479 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:55.159487 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:55.162469 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:55.162521 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:55.162544 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:55.162552 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:55.162559 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:55 GMT
	I0717 21:26:55.162566 1199225 round_trippers.go:580]     Audit-Id: 9135e67d-b24e-454e-b1b0-1431d33cbc45
	I0717 21:26:55.162573 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:55.162579 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:55.162746 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:55.658833 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:55.658856 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:55.658867 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:55.658875 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:55.661591 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:55.661627 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:55.661639 1199225 round_trippers.go:580]     Audit-Id: 9c0ae5b3-3a3b-4c1e-8c6d-33d2765cc362
	I0717 21:26:55.661667 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:55.661674 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:55.661680 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:55.661688 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:55.661695 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:55 GMT
	I0717 21:26:55.661824 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:56.158905 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:56.158929 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:56.158939 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:56.158948 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:56.161547 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:56.161570 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:56.161578 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:56.161585 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:56.161593 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:56.161600 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:56 GMT
	I0717 21:26:56.161607 1199225 round_trippers.go:580]     Audit-Id: a0dbcad4-effb-455d-b137-f7a62fadddd8
	I0717 21:26:56.161614 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:56.161728 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:56.162139 1199225 node_ready.go:58] node "multinode-810165" has status "Ready":"False"
	I0717 21:26:56.658827 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:56.658850 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:56.658860 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:56.658868 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:56.661526 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:56.661550 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:56.661559 1199225 round_trippers.go:580]     Audit-Id: 3ad70f2e-3adb-434d-a581-632ab1620414
	I0717 21:26:56.661566 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:56.661572 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:56.661579 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:56.661585 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:56.661592 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:56 GMT
	I0717 21:26:56.661726 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:57.159391 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:57.159416 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:57.159426 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:57.159434 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:57.162167 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:57.162193 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:57.162203 1199225 round_trippers.go:580]     Audit-Id: 920b48da-6ae5-4882-87ae-09fa92bc7d80
	I0717 21:26:57.162210 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:57.162216 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:57.162223 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:57.162230 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:57.162241 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:57 GMT
	I0717 21:26:57.162372 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:57.659000 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:57.659024 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:57.659035 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:57.659043 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:57.661609 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:57.661635 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:57.661645 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:57.661653 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:57.661660 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:57.661670 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:57.661686 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:57 GMT
	I0717 21:26:57.661694 1199225 round_trippers.go:580]     Audit-Id: f9d7de17-0711-41ac-8fd2-4b5c3875c3fd
	I0717 21:26:57.661823 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:58.158827 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:58.158853 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:58.158863 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:58.158871 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:58.161898 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:26:58.161927 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:58.161941 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:58.161949 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:58.161956 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:58.161963 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:58 GMT
	I0717 21:26:58.161970 1199225 round_trippers.go:580]     Audit-Id: 0f47b77f-dc21-4025-bf35-83c4ea70f717
	I0717 21:26:58.161977 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:58.162107 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:58.162534 1199225 node_ready.go:58] node "multinode-810165" has status "Ready":"False"
	I0717 21:26:58.658849 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:58.658871 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:58.658881 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:58.658888 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:58.661637 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:58.661662 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:58.661672 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:58.661679 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:58.661686 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:58 GMT
	I0717 21:26:58.661693 1199225 round_trippers.go:580]     Audit-Id: 141d890e-21d5-4863-8753-773cb315ae8c
	I0717 21:26:58.661700 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:58.661709 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:58.662030 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:59.159694 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:59.159717 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:59.159728 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:59.159735 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:59.162229 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:59.162250 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:59.162258 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:59 GMT
	I0717 21:26:59.162265 1199225 round_trippers.go:580]     Audit-Id: 84fbb39e-59e4-4ade-8fe8-a07e062c7dd6
	I0717 21:26:59.162272 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:59.162278 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:59.162285 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:59.162291 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:59.162439 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"358","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0717 21:26:59.659559 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:59.659583 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:59.659594 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:59.659602 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:59.662160 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:59.662185 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:59.662195 1199225 round_trippers.go:580]     Audit-Id: 333b7e3c-5584-42ea-8f7e-a9d5165e5245
	I0717 21:26:59.662203 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:59.662210 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:59.662220 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:59.662227 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:59.662242 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:59 GMT
	I0717 21:26:59.662466 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:26:59.662860 1199225 node_ready.go:49] node "multinode-810165" has status "Ready":"True"
	I0717 21:26:59.662878 1199225 node_ready.go:38] duration metric: took 30.508116424s waiting for node "multinode-810165" to be "Ready" ...
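[Editor's note: the node_ready loop above repeatedly GETs /api/v1/nodes/multinode-810165 until the NodeReady condition flips from "False" to "True". A minimal client-go sketch of that pattern follows; the clientset, node name, and ~500ms cadence are illustrative assumptions, not minikube's exact node_ready.go.]

package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the Node object until its Ready condition reports
// "True", mirroring the ~500ms GET cadence visible in the log above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				fmt.Printf("node %q has status \"Ready\":\"True\"\n", name)
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}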
	I0717 21:26:59.662888 1199225 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 21:26:59.662959 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 21:26:59.662970 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:59.662978 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:59.662985 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:59.666397 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:26:59.666419 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:59.666427 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:59 GMT
	I0717 21:26:59.666434 1199225 round_trippers.go:580]     Audit-Id: 68099353-c119-4642-9620-505a6493bee5
	I0717 21:26:59.666441 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:59.666448 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:59.666454 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:59.666461 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:59.667050 1199225 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-5d78c9869d-sz6sv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0cd666c9-e596-4d13-ba82-c51fdd049cd5","resourceVersion":"432","creationTimestamp":"2023-07-17T21:26:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55535 chars]
	I0717 21:26:59.670991 1199225 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-sz6sv" in "kube-system" namespace to be "Ready" ...
	I0717 21:26:59.671079 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-sz6sv
	I0717 21:26:59.671089 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:59.671099 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:59.671106 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:59.673779 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:59.673835 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:59.673853 1199225 round_trippers.go:580]     Audit-Id: 7b0d1db3-04e4-4199-a363-59f283ef8f25
	I0717 21:26:59.673861 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:59.673868 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:59.673875 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:59.673884 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:59.673891 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:59 GMT
	I0717 21:26:59.674063 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-sz6sv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0cd666c9-e596-4d13-ba82-c51fdd049cd5","resourceVersion":"432","creationTimestamp":"2023-07-17T21:26:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0717 21:26:59.674614 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:26:59.674630 1199225 round_trippers.go:469] Request Headers:
	I0717 21:26:59.674639 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:26:59.674646 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:26:59.676888 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:26:59.676911 1199225 round_trippers.go:577] Response Headers:
	I0717 21:26:59.676920 1199225 round_trippers.go:580]     Audit-Id: d4f5e37d-cb26-4fd3-a348-1cb852336e46
	I0717 21:26:59.676927 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:26:59.676934 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:26:59.676941 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:26:59.676951 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:26:59.676960 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:26:59 GMT
	I0717 21:26:59.677090 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:27:00.178370 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-sz6sv
	I0717 21:27:00.178400 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:00.178410 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:00.178418 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:00.184167 1199225 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 21:27:00.184196 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:00.184206 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:00.184213 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:00.184221 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:00 GMT
	I0717 21:27:00.184228 1199225 round_trippers.go:580]     Audit-Id: a44487f2-f5d7-48ee-92ba-da6aa8559c44
	I0717 21:27:00.184235 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:00.184242 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:00.184363 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-sz6sv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0cd666c9-e596-4d13-ba82-c51fdd049cd5","resourceVersion":"432","creationTimestamp":"2023-07-17T21:26:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0717 21:27:00.184924 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:00.184934 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:00.184943 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:00.184954 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:00.192583 1199225 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 21:27:00.192609 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:00.192622 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:00 GMT
	I0717 21:27:00.192630 1199225 round_trippers.go:580]     Audit-Id: bee51687-3241-4bc8-86ca-017f4a43a91a
	I0717 21:27:00.192636 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:00.192643 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:00.192650 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:00.192657 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:00.192778 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:27:00.677951 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-sz6sv
	I0717 21:27:00.677975 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:00.677985 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:00.677992 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:00.680565 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:00.680592 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:00.680601 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:00.680608 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:00.680615 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:00.680621 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:00.680628 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:00 GMT
	I0717 21:27:00.680640 1199225 round_trippers.go:580]     Audit-Id: 68fd65cf-ad9e-4e0c-95dc-ca30d490dd6a
	I0717 21:27:00.680755 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-sz6sv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0cd666c9-e596-4d13-ba82-c51fdd049cd5","resourceVersion":"432","creationTimestamp":"2023-07-17T21:26:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0717 21:27:00.681321 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:00.681332 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:00.681340 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:00.681347 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:00.683782 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:00.683803 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:00.683811 1199225 round_trippers.go:580]     Audit-Id: 117bdb75-e801-47fa-9745-95de675cf6d5
	I0717 21:27:00.683819 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:00.683833 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:00.683840 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:00.683847 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:00.683853 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:00 GMT
	I0717 21:27:00.683989 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:27:01.178030 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-sz6sv
	I0717 21:27:01.178055 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:01.178065 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:01.178075 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:01.180742 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:01.180765 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:01.180774 1199225 round_trippers.go:580]     Audit-Id: 367897b7-f243-4c7b-8d43-15bffffa85bf
	I0717 21:27:01.180781 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:01.180787 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:01.180794 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:01.180801 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:01.180808 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:01 GMT
	I0717 21:27:01.180968 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-sz6sv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0cd666c9-e596-4d13-ba82-c51fdd049cd5","resourceVersion":"442","creationTimestamp":"2023-07-17T21:26:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0717 21:27:01.181540 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:01.181561 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:01.181572 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:01.181579 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:01.184206 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:01.184232 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:01.184240 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:01.184247 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:01.184254 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:01 GMT
	I0717 21:27:01.184261 1199225 round_trippers.go:580]     Audit-Id: 89a53981-f06d-4318-88c9-85cfe53d63dc
	I0717 21:27:01.184268 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:01.184275 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:01.184420 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:27:01.184813 1199225 pod_ready.go:92] pod "coredns-5d78c9869d-sz6sv" in "kube-system" namespace has status "Ready":"True"
	I0717 21:27:01.184831 1199225 pod_ready.go:81] duration metric: took 1.513808979s waiting for pod "coredns-5d78c9869d-sz6sv" in "kube-system" namespace to be "Ready" ...
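[Editor's note: the per-pod waits that follow (coredns above, then etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) all reduce to the same predicate behind each pod_ready.go:92 line. A sketch, with the clientset and names again assumed for illustration:]

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the pod's PodReady condition is "True".
func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}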
	I0717 21:27:01.184849 1199225 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:01.184914 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-810165
	I0717 21:27:01.184922 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:01.184930 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:01.184940 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:01.187373 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:01.187396 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:01.187405 1199225 round_trippers.go:580]     Audit-Id: ba28303f-85dc-4256-ad27-721369e08977
	I0717 21:27:01.187412 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:01.187418 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:01.187424 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:01.187431 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:01.187440 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:01 GMT
	I0717 21:27:01.187593 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-810165","namespace":"kube-system","uid":"940b7970-5f26-401c-9994-d77008b6d302","resourceVersion":"327","creationTimestamp":"2023-07-17T21:26:15Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ce32c73a62db7bf84590abf5273c1610","kubernetes.io/config.mirror":"ce32c73a62db7bf84590abf5273c1610","kubernetes.io/config.seen":"2023-07-17T21:26:07.702630245Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0717 21:27:01.188077 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:01.188090 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:01.188099 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:01.188108 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:01.190657 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:01.190679 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:01.190688 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:01.190694 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:01.190701 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:01.190708 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:01 GMT
	I0717 21:27:01.190715 1199225 round_trippers.go:580]     Audit-Id: 5f942486-9431-4c62-b867-c12395245335
	I0717 21:27:01.190721 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:01.190921 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:27:01.191348 1199225 pod_ready.go:92] pod "etcd-multinode-810165" in "kube-system" namespace has status "Ready":"True"
	I0717 21:27:01.191371 1199225 pod_ready.go:81] duration metric: took 6.510277ms waiting for pod "etcd-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:01.191387 1199225 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:01.191451 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-810165
	I0717 21:27:01.191461 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:01.191469 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:01.191477 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:01.194165 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:01.194193 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:01.194209 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:01.194216 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:01.194224 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:01 GMT
	I0717 21:27:01.194238 1199225 round_trippers.go:580]     Audit-Id: 7eaf3ddc-fe1f-4147-b6a2-63f24b990fb2
	I0717 21:27:01.194245 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:01.194257 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:01.194419 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-810165","namespace":"kube-system","uid":"a7633458-ccb5-468c-83f2-49d4163e531d","resourceVersion":"307","creationTimestamp":"2023-07-17T21:26:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"7ed7dd74add45e8e07e2f2a7e8e5f118","kubernetes.io/config.mirror":"7ed7dd74add45e8e07e2f2a7e8e5f118","kubernetes.io/config.seen":"2023-07-17T21:26:15.462821448Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0717 21:27:01.194981 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:01.195005 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:01.195013 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:01.195031 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:01.197741 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:01.197775 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:01.197784 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:01.197791 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:01.197797 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:01.197804 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:01.197811 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:01 GMT
	I0717 21:27:01.197818 1199225 round_trippers.go:580]     Audit-Id: 1bc960a7-2163-4a87-974d-5f523c6a8f4b
	I0717 21:27:01.197939 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:27:01.198341 1199225 pod_ready.go:92] pod "kube-apiserver-multinode-810165" in "kube-system" namespace has status "Ready":"True"
	I0717 21:27:01.198359 1199225 pod_ready.go:81] duration metric: took 6.964036ms waiting for pod "kube-apiserver-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:01.198373 1199225 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:01.198442 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-810165
	I0717 21:27:01.198452 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:01.198461 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:01.198467 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:01.201115 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:01.201141 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:01.201150 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:01 GMT
	I0717 21:27:01.201179 1199225 round_trippers.go:580]     Audit-Id: 02bb7088-a860-46ef-ba05-f09bd47287fe
	I0717 21:27:01.201188 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:01.201195 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:01.201201 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:01.201208 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:01.201569 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-810165","namespace":"kube-system","uid":"abb5ca6b-3ac7-4f15-9507-c3b23658399d","resourceVersion":"328","creationTimestamp":"2023-07-17T21:26:15Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"627e33095190fdce831ec9aa3b244b71","kubernetes.io/config.mirror":"627e33095190fdce831ec9aa3b244b71","kubernetes.io/config.seen":"2023-07-17T21:26:15.462827413Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0717 21:27:01.202144 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:01.202161 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:01.202170 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:01.202178 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:01.204774 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:01.204805 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:01.204815 1199225 round_trippers.go:580]     Audit-Id: d862d4b9-2536-4d40-9ab4-633573517bee
	I0717 21:27:01.204822 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:01.204829 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:01.204838 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:01.204845 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:01.204852 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:01 GMT
	I0717 21:27:01.205096 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:27:01.205540 1199225 pod_ready.go:92] pod "kube-controller-manager-multinode-810165" in "kube-system" namespace has status "Ready":"True"
	I0717 21:27:01.205562 1199225 pod_ready.go:81] duration metric: took 7.172693ms waiting for pod "kube-controller-manager-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:01.205575 1199225 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-244vk" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:01.205640 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-244vk
	I0717 21:27:01.205651 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:01.205659 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:01.205667 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:01.208344 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:01.208367 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:01.208376 1199225 round_trippers.go:580]     Audit-Id: 2db3ee78-f8ec-400d-98c9-963d7395578b
	I0717 21:27:01.208384 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:01.208391 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:01.208397 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:01.208404 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:01.208410 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:01 GMT
	I0717 21:27:01.208591 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-244vk","generateName":"kube-proxy-","namespace":"kube-system","uid":"3af224a1-d471-4cf5-b8dc-1abb030901c5","resourceVersion":"413","creationTimestamp":"2023-07-17T21:26:28Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd2fb151-6110-42d2-8b60-f21076800dc8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd2fb151-6110-42d2-8b60-f21076800dc8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0717 21:27:01.260288 1199225 request.go:628] Waited for 51.206041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:01.260346 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:01.260352 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:01.260361 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:01.260372 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:01.263040 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:01.263073 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:01.263082 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:01.263091 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:01 GMT
	I0717 21:27:01.263098 1199225 round_trippers.go:580]     Audit-Id: fed8526f-acab-41a0-987d-384de6af0eaf
	I0717 21:27:01.263104 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:01.263111 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:01.263123 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:01.263279 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:27:01.263681 1199225 pod_ready.go:92] pod "kube-proxy-244vk" in "kube-system" namespace has status "Ready":"True"
	I0717 21:27:01.263701 1199225 pod_ready.go:81] duration metric: took 58.114767ms waiting for pod "kube-proxy-244vk" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:01.263714 1199225 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:01.460099 1199225 request.go:628] Waited for 196.307948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-810165
	I0717 21:27:01.460163 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-810165
	I0717 21:27:01.460177 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:01.460187 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:01.460197 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:01.462946 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:01.462971 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:01.462980 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:01.462988 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:01.462995 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:01.463021 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:01.463036 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:01 GMT
	I0717 21:27:01.463043 1199225 round_trippers.go:580]     Audit-Id: fd54f167-20db-4fa7-be65-2f99398ddc44
	I0717 21:27:01.463448 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-810165","namespace":"kube-system","uid":"66b548db-9ef8-4ca9-ac6c-e148a2b0d30a","resourceVersion":"309","creationTimestamp":"2023-07-17T21:26:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19e0402d40ce8bc04850f4115dc87876","kubernetes.io/config.mirror":"19e0402d40ce8bc04850f4115dc87876","kubernetes.io/config.seen":"2023-07-17T21:26:15.462828972Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0717 21:27:01.660242 1199225 request.go:628] Waited for 196.358861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:01.660302 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:01.660308 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:01.660318 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:01.660329 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:01.662884 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:01.662911 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:01.662920 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:01.662927 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:01.662934 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:01.662941 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:01 GMT
	I0717 21:27:01.662949 1199225 round_trippers.go:580]     Audit-Id: d21446d9-da80-4eb1-8836-3b31c84a8190
	I0717 21:27:01.662959 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:01.663126 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:27:01.663543 1199225 pod_ready.go:92] pod "kube-scheduler-multinode-810165" in "kube-system" namespace has status "Ready":"True"
	I0717 21:27:01.663573 1199225 pod_ready.go:81] duration metric: took 399.84806ms waiting for pod "kube-scheduler-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:01.663585 1199225 pod_ready.go:38] duration metric: took 2.000681439s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
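[Editor's note: the request.go:628 "Waited ... due to client-side throttling, not priority and fairness" lines scattered through this phase come from client-go's own token-bucket rate limiter, not from the apiserver. A sketch of where that limiter is configured; the values shown are client-go's defaults, and changing them is an assumption, not something this test does:]

package example

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/flowcontrol"
)

// configureRateLimit shows the knobs behind the client-side throttling waits.
func configureRateLimit(cfg *rest.Config) {
	cfg.QPS = 5    // steady-state requests per second (client-go default)
	cfg.Burst = 10 // short bursts allowed above QPS (client-go default)
	// Equivalent explicit form:
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
}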
	I0717 21:27:01.663608 1199225 api_server.go:52] waiting for apiserver process to appear ...
	I0717 21:27:01.663673 1199225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 21:27:01.676221 1199225 command_runner.go:130] > 1266
	I0717 21:27:01.677734 1199225 api_server.go:72] duration metric: took 32.963024005s to wait for apiserver process to appear ...
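[Editor's note: the process check above runs `sudo pgrep -xnf kube-apiserver.*minikube.*` via the ssh_runner and treats the printed PID (1266 here) as success. A local os/exec sketch of the same probe; the sudo prefix assumes passwordless sudo, as inside the minikube node:]

package example

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID returns the newest PID whose full command line matches the
// pattern, which is what the ssh_runner invocation above asks pgrep for.
func apiserverPID() (string, error) {
	// -x: whole command line must match, -n: newest process, -f: match full args.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver process not found: %w", err)
	}
	return strings.TrimSpace(string(out)), nil // "1266" in the run above
}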
	I0717 21:27:01.677792 1199225 api_server.go:88] waiting for apiserver healthz status ...
	I0717 21:27:01.677825 1199225 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0717 21:27:01.688150 1199225 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
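[Editor's note: the healthz probe is a plain authenticated GET whose entire success contract is a 200 with the literal body "ok", exactly what the two lines above record. A sketch via the clientset's REST client, with the clientset assumed as before:]

package example

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// apiserverHealthy mirrors the api_server.go:253 check: GET /healthz and
// accept exactly a 200 response whose body is "ok".
func apiserverHealthy(ctx context.Context, cs kubernetes.Interface) bool {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	return err == nil && string(body) == "ok"
}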
	I0717 21:27:01.688230 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0717 21:27:01.688242 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:01.688254 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:01.688261 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:01.689593 1199225 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 21:27:01.689658 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:01.689674 1199225 round_trippers.go:580]     Content-Length: 263
	I0717 21:27:01.689681 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:01 GMT
	I0717 21:27:01.689688 1199225 round_trippers.go:580]     Audit-Id: 93ecc460-cca9-4f72-922c-1530c20ca522
	I0717 21:27:01.689695 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:01.689702 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:01.689736 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:01.689758 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:01.689785 1199225 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0717 21:27:01.689870 1199225 api_server.go:141] control plane version: v1.27.3
	I0717 21:27:01.689888 1199225 api_server.go:131] duration metric: took 12.07471ms to wait for apiserver health ...
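[Editor's note: the /version body above decodes into apimachinery's version.Info struct, and the discovery client does the GET and the decode in one call. A sketch, with the discovery client assumed:]

package example

import (
	"k8s.io/client-go/discovery"
)

// controlPlaneVersion performs the GET /version seen above and returns the
// gitVersion field, "v1.27.3" in this run.
func controlPlaneVersion(dc discovery.DiscoveryInterface) (string, error) {
	info, err := dc.ServerVersion()
	if err != nil {
		return "", err
	}
	return info.GitVersion, nil
}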
	I0717 21:27:01.689896 1199225 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 21:27:01.860335 1199225 request.go:628] Waited for 170.336305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 21:27:01.860390 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 21:27:01.860396 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:01.860405 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:01.860465 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:01.864284 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:01.864353 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:01.864376 1199225 round_trippers.go:580]     Audit-Id: e4fb70fd-8383-4295-bc2c-143df7eb0c0d
	I0717 21:27:01.864398 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:01.864436 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:01.864468 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:01.864492 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:01.864526 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:01 GMT
	I0717 21:27:01.865123 1199225 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-5d78c9869d-sz6sv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0cd666c9-e596-4d13-ba82-c51fdd049cd5","resourceVersion":"442","creationTimestamp":"2023-07-17T21:26:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0717 21:27:01.867587 1199225 system_pods.go:59] 8 kube-system pods found
	I0717 21:27:01.867634 1199225 system_pods.go:61] "coredns-5d78c9869d-sz6sv" [0cd666c9-e596-4d13-ba82-c51fdd049cd5] Running
	I0717 21:27:01.867645 1199225 system_pods.go:61] "etcd-multinode-810165" [940b7970-5f26-401c-9994-d77008b6d302] Running
	I0717 21:27:01.867651 1199225 system_pods.go:61] "kindnet-l6lkj" [3a967812-a0b8-450c-a7e7-5ca2bcd8d441] Running
	I0717 21:27:01.867661 1199225 system_pods.go:61] "kube-apiserver-multinode-810165" [a7633458-ccb5-468c-83f2-49d4163e531d] Running
	I0717 21:27:01.867667 1199225 system_pods.go:61] "kube-controller-manager-multinode-810165" [abb5ca6b-3ac7-4f15-9507-c3b23658399d] Running
	I0717 21:27:01.867675 1199225 system_pods.go:61] "kube-proxy-244vk" [3af224a1-d471-4cf5-b8dc-1abb030901c5] Running
	I0717 21:27:01.867681 1199225 system_pods.go:61] "kube-scheduler-multinode-810165" [66b548db-9ef8-4ca9-ac6c-e148a2b0d30a] Running
	I0717 21:27:01.867689 1199225 system_pods.go:61] "storage-provisioner" [12c56db5-ec1b-4e20-b798-cb2c02e5007f] Running
	I0717 21:27:01.867694 1199225 system_pods.go:74] duration metric: took 177.774885ms to wait for pod list to return data ...
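
The same pod survey can be reproduced by hand; a sketch, assuming kubectl's current context points at this profile (minikube names the context after the profile by default):

    kubectl --context multinode-810165 get pods -n kube-system
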
	I0717 21:27:01.867703 1199225 default_sa.go:34] waiting for default service account to be created ...
	I0717 21:27:02.060085 1199225 request.go:628] Waited for 192.281196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0717 21:27:02.060141 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0717 21:27:02.060147 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:02.060156 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:02.060169 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:02.062968 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:02.063003 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:02.063013 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:02.063020 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:02.063033 1199225 round_trippers.go:580]     Content-Length: 261
	I0717 21:27:02.063043 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:02 GMT
	I0717 21:27:02.063053 1199225 round_trippers.go:580]     Audit-Id: ba2035ee-a445-40f7-967b-a6dbf6f70465
	I0717 21:27:02.063061 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:02.063071 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:02.063094 1199225 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7a521215-553f-4a5e-ae2f-6f80a684fcb3","resourceVersion":"341","creationTimestamp":"2023-07-17T21:26:27Z"}}]}
	I0717 21:27:02.063299 1199225 default_sa.go:45] found service account: "default"
	I0717 21:27:02.063316 1199225 default_sa.go:55] duration metric: took 195.601627ms for default service account to be created ...
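
The service-account check is likewise just a list call against the default namespace; same context assumption as above:

    kubectl --context multinode-810165 get serviceaccounts -n default
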
	I0717 21:27:02.063326 1199225 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 21:27:02.259715 1199225 request.go:628] Waited for 196.291981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 21:27:02.259774 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 21:27:02.259780 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:02.259789 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:02.259802 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:02.263553 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:02.263583 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:02.263597 1199225 round_trippers.go:580]     Audit-Id: 814a40a2-375c-4979-bdbe-b6563d9e16ed
	I0717 21:27:02.263604 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:02.263611 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:02.263618 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:02.263624 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:02.263632 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:02 GMT
	I0717 21:27:02.264671 1199225 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"coredns-5d78c9869d-sz6sv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0cd666c9-e596-4d13-ba82-c51fdd049cd5","resourceVersion":"442","creationTimestamp":"2023-07-17T21:26:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0717 21:27:02.267257 1199225 system_pods.go:86] 8 kube-system pods found
	I0717 21:27:02.267284 1199225 system_pods.go:89] "coredns-5d78c9869d-sz6sv" [0cd666c9-e596-4d13-ba82-c51fdd049cd5] Running
	I0717 21:27:02.267299 1199225 system_pods.go:89] "etcd-multinode-810165" [940b7970-5f26-401c-9994-d77008b6d302] Running
	I0717 21:27:02.267305 1199225 system_pods.go:89] "kindnet-l6lkj" [3a967812-a0b8-450c-a7e7-5ca2bcd8d441] Running
	I0717 21:27:02.267314 1199225 system_pods.go:89] "kube-apiserver-multinode-810165" [a7633458-ccb5-468c-83f2-49d4163e531d] Running
	I0717 21:27:02.267326 1199225 system_pods.go:89] "kube-controller-manager-multinode-810165" [abb5ca6b-3ac7-4f15-9507-c3b23658399d] Running
	I0717 21:27:02.267334 1199225 system_pods.go:89] "kube-proxy-244vk" [3af224a1-d471-4cf5-b8dc-1abb030901c5] Running
	I0717 21:27:02.267340 1199225 system_pods.go:89] "kube-scheduler-multinode-810165" [66b548db-9ef8-4ca9-ac6c-e148a2b0d30a] Running
	I0717 21:27:02.267351 1199225 system_pods.go:89] "storage-provisioner" [12c56db5-ec1b-4e20-b798-cb2c02e5007f] Running
	I0717 21:27:02.267359 1199225 system_pods.go:126] duration metric: took 204.021383ms to wait for k8s-apps to be running ...
	I0717 21:27:02.267368 1199225 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 21:27:02.267438 1199225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:27:02.282169 1199225 system_svc.go:56] duration metric: took 14.787945ms WaitForService to wait for kubelet.
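
The kubelet probe is a plain systemd liveness check; a sketch of the same command run manually through minikube's ssh wrapper (exit status 0 means active):

    minikube -p multinode-810165 ssh -- sudo systemctl is-active --quiet service kubelet && echo kubelet active
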
	I0717 21:27:02.282197 1199225 kubeadm.go:581] duration metric: took 33.567493209s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 21:27:02.282218 1199225 node_conditions.go:102] verifying NodePressure condition ...
	I0717 21:27:02.460656 1199225 request.go:628] Waited for 178.359032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0717 21:27:02.460736 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0717 21:27:02.460746 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:02.460759 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:02.460771 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:02.463482 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:02.463511 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:02.463523 1199225 round_trippers.go:580]     Audit-Id: 9c6fdeea-db56-4a3a-8a32-b0010538430e
	I0717 21:27:02.463533 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:02.463540 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:02.463546 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:02.463552 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:02.463562 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:02 GMT
	I0717 21:27:02.463724 1199225 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0717 21:27:02.464483 1199225 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 21:27:02.464520 1199225 node_conditions.go:123] node cpu capacity is 2
	I0717 21:27:02.464537 1199225 node_conditions.go:105] duration metric: took 182.313783ms to run NodePressure ...
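
The capacity figures above come straight from the Node object's status; a sketch to read them back under the same context assumption:

    kubectl --context multinode-810165 get node multinode-810165 -o jsonpath='{.status.capacity}'
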
	I0717 21:27:02.464548 1199225 start.go:228] waiting for startup goroutines ...
	I0717 21:27:02.464562 1199225 start.go:233] waiting for cluster config update ...
	I0717 21:27:02.464572 1199225 start.go:242] writing updated cluster config ...
	I0717 21:27:02.467724 1199225 out.go:177] 
	I0717 21:27:02.470969 1199225 config.go:182] Loaded profile config "multinode-810165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:27:02.471068 1199225 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/config.json ...
	I0717 21:27:02.473434 1199225 out.go:177] * Starting worker node multinode-810165-m02 in cluster multinode-810165
	I0717 21:27:02.475059 1199225 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 21:27:02.476789 1199225 out.go:177] * Pulling base image ...
	I0717 21:27:02.478364 1199225 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:27:02.478399 1199225 cache.go:57] Caching tarball of preloaded images
	I0717 21:27:02.478467 1199225 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 21:27:02.478505 1199225 preload.go:174] Found /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0717 21:27:02.478515 1199225 cache.go:60] Finished verifying existence of preloaded tar for v1.27.3 on crio
	I0717 21:27:02.478615 1199225 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/config.json ...
	I0717 21:27:02.495106 1199225 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 21:27:02.495129 1199225 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 21:27:02.495148 1199225 cache.go:195] Successfully downloaded all kic artifacts
	I0717 21:27:02.495177 1199225 start.go:365] acquiring machines lock for multinode-810165-m02: {Name:mka28acd619bb863250bb67d555a473d38f0bf39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:27:02.495306 1199225 start.go:369] acquired machines lock for "multinode-810165-m02" in 103.844µs
	I0717 21:27:02.495337 1199225 start.go:93] Provisioning new machine with config: &{Name:multinode-810165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-810165 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 21:27:02.495436 1199225 start.go:125] createHost starting for "m02" (driver="docker")
	I0717 21:27:02.497465 1199225 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 21:27:02.497577 1199225 start.go:159] libmachine.API.Create for "multinode-810165" (driver="docker")
	I0717 21:27:02.497605 1199225 client.go:168] LocalClient.Create starting
	I0717 21:27:02.497692 1199225 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem
	I0717 21:27:02.497730 1199225 main.go:141] libmachine: Decoding PEM data...
	I0717 21:27:02.497751 1199225 main.go:141] libmachine: Parsing certificate...
	I0717 21:27:02.497809 1199225 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem
	I0717 21:27:02.497832 1199225 main.go:141] libmachine: Decoding PEM data...
	I0717 21:27:02.497846 1199225 main.go:141] libmachine: Parsing certificate...
	I0717 21:27:02.498077 1199225 cli_runner.go:164] Run: docker network inspect multinode-810165 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 21:27:02.515029 1199225 network_create.go:76] Found existing network {name:multinode-810165 subnet:0x400178b140 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0717 21:27:02.515065 1199225 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-810165-m02" container
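
The .3 address follows from the subnet of the existing network (the control plane already holds .2); a sketch to read the subnet and gateway back from docker, which in this run should report the 192.168.58.0/24 range and the .1 gateway:

    docker network inspect multinode-810165 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
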
	I0717 21:27:02.515136 1199225 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 21:27:02.532774 1199225 cli_runner.go:164] Run: docker volume create multinode-810165-m02 --label name.minikube.sigs.k8s.io=multinode-810165-m02 --label created_by.minikube.sigs.k8s.io=true
	I0717 21:27:02.551849 1199225 oci.go:103] Successfully created a docker volume multinode-810165-m02
	I0717 21:27:02.551943 1199225 cli_runner.go:164] Run: docker run --rm --name multinode-810165-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-810165-m02 --entrypoint /usr/bin/test -v multinode-810165-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 21:27:03.113537 1199225 oci.go:107] Successfully prepared a docker volume multinode-810165-m02
	I0717 21:27:03.113589 1199225 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:27:03.113611 1199225 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 21:27:03.113713 1199225 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-810165-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 21:27:07.316540 1199225 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-810165-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.202778765s)
	I0717 21:27:07.316578 1199225 kic.go:199] duration metric: took 4.202963 seconds to extract preloaded images to volume
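
Stripped of the minikube-specific labels, the preload trick above is: create a named volume, then untar the cached images into it through a throwaway container. A sketch with hypothetical placeholders (<preload.tar.lz4> and <kicbase-image> stand in for the real artifacts used in this run):

    docker volume create demo-m02
    docker run --rm --entrypoint /usr/bin/tar \
      -v <preload.tar.lz4>:/preloaded.tar:ro \
      -v demo-m02:/extractDir \
      <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir
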
	W0717 21:27:07.316718 1199225 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 21:27:07.316832 1199225 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 21:27:07.387623 1199225 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-810165-m02 --name multinode-810165-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-810165-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-810165-m02 --network multinode-810165 --ip 192.168.58.3 --volume multinode-810165-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 21:27:07.752986 1199225 cli_runner.go:164] Run: docker container inspect multinode-810165-m02 --format={{.State.Running}}
	I0717 21:27:07.779875 1199225 cli_runner.go:164] Run: docker container inspect multinode-810165-m02 --format={{.State.Status}}
	I0717 21:27:07.806970 1199225 cli_runner.go:164] Run: docker exec multinode-810165-m02 stat /var/lib/dpkg/alternatives/iptables
	I0717 21:27:07.882132 1199225 oci.go:144] the created container "multinode-810165-m02" has a running status.
	I0717 21:27:07.882161 1199225 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165-m02/id_rsa...
	I0717 21:27:08.252585 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0717 21:27:08.252674 1199225 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 21:27:08.280300 1199225 cli_runner.go:164] Run: docker container inspect multinode-810165-m02 --format={{.State.Status}}
	I0717 21:27:08.302038 1199225 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 21:27:08.302057 1199225 kic_runner.go:114] Args: [docker exec --privileged multinode-810165-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 21:27:08.392911 1199225 cli_runner.go:164] Run: docker container inspect multinode-810165-m02 --format={{.State.Status}}
	I0717 21:27:08.433398 1199225 machine.go:88] provisioning docker machine ...
	I0717 21:27:08.433426 1199225 ubuntu.go:169] provisioning hostname "multinode-810165-m02"
	I0717 21:27:08.433493 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165-m02
	I0717 21:27:08.466114 1199225 main.go:141] libmachine: Using SSH client type: native
	I0717 21:27:08.466564 1199225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34106 <nil> <nil>}
	I0717 21:27:08.466576 1199225 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-810165-m02 && echo "multinode-810165-m02" | sudo tee /etc/hostname
	I0717 21:27:08.468547 1199225 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0717 21:27:11.612555 1199225 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-810165-m02
	
	I0717 21:27:11.612637 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165-m02
	I0717 21:27:11.631734 1199225 main.go:141] libmachine: Using SSH client type: native
	I0717 21:27:11.632170 1199225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34106 <nil> <nil>}
	I0717 21:27:11.632194 1199225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-810165-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-810165-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-810165-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 21:27:11.762993 1199225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 21:27:11.763027 1199225 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1130480/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1130480/.minikube}
	I0717 21:27:11.763050 1199225 ubuntu.go:177] setting up certificates
	I0717 21:27:11.763058 1199225 provision.go:83] configureAuth start
	I0717 21:27:11.763119 1199225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-810165-m02
	I0717 21:27:11.787418 1199225 provision.go:138] copyHostCerts
	I0717 21:27:11.787464 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem
	I0717 21:27:11.787497 1199225 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem, removing ...
	I0717 21:27:11.787509 1199225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem
	I0717 21:27:11.787601 1199225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem (1082 bytes)
	I0717 21:27:11.787691 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem
	I0717 21:27:11.787717 1199225 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem, removing ...
	I0717 21:27:11.787721 1199225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem
	I0717 21:27:11.787747 1199225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem (1123 bytes)
	I0717 21:27:11.787788 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem
	I0717 21:27:11.787809 1199225 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem, removing ...
	I0717 21:27:11.787813 1199225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem
	I0717 21:27:11.787837 1199225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem (1675 bytes)
	I0717 21:27:11.787886 1199225 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem org=jenkins.multinode-810165-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-810165-m02]
	I0717 21:27:11.993198 1199225 provision.go:172] copyRemoteCerts
	I0717 21:27:11.993287 1199225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 21:27:11.993350 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165-m02
	I0717 21:27:12.024447 1199225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34106 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165-m02/id_rsa Username:docker}
	I0717 21:27:12.124841 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 21:27:12.124928 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 21:27:12.155923 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 21:27:12.155988 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0717 21:27:12.184541 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 21:27:12.184602 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
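
To confirm the server certificate just copied carries the SANs requested at generation time (192.168.58.3, 127.0.0.1, localhost, minikube, multinode-810165-m02), it can be inspected on the node; a sketch, requiring OpenSSL 1.1.1+ for the -ext flag:

    sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName
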
	I0717 21:27:12.213919 1199225 provision.go:86] duration metric: configureAuth took 450.846257ms
	I0717 21:27:12.213944 1199225 ubuntu.go:193] setting minikube options for container-runtime
	I0717 21:27:12.214145 1199225 config.go:182] Loaded profile config "multinode-810165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:27:12.214250 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165-m02
	I0717 21:27:12.232536 1199225 main.go:141] libmachine: Using SSH client type: native
	I0717 21:27:12.232969 1199225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34106 <nil> <nil>}
	I0717 21:27:12.232984 1199225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 21:27:12.481690 1199225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 21:27:12.481710 1199225 machine.go:91] provisioned docker machine in 4.048293968s
	I0717 21:27:12.481719 1199225 client.go:171] LocalClient.Create took 9.984105783s
	I0717 21:27:12.481731 1199225 start.go:167] duration metric: libmachine.API.Create for "multinode-810165" took 9.984153889s
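
The %!s(MISSING) in the tee command a few lines above is a Go format-verb artifact in the log, not part of what actually ran; judging from the file contents echoed back, the executed command amounts to this sketch:

    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
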
	I0717 21:27:12.481738 1199225 start.go:300] post-start starting for "multinode-810165-m02" (driver="docker")
	I0717 21:27:12.481747 1199225 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 21:27:12.481825 1199225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 21:27:12.481875 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165-m02
	I0717 21:27:12.500719 1199225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34106 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165-m02/id_rsa Username:docker}
	I0717 21:27:12.596623 1199225 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 21:27:12.600741 1199225 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0717 21:27:12.600759 1199225 command_runner.go:130] > NAME="Ubuntu"
	I0717 21:27:12.600766 1199225 command_runner.go:130] > VERSION_ID="22.04"
	I0717 21:27:12.600772 1199225 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0717 21:27:12.600785 1199225 command_runner.go:130] > VERSION_CODENAME=jammy
	I0717 21:27:12.600791 1199225 command_runner.go:130] > ID=ubuntu
	I0717 21:27:12.600796 1199225 command_runner.go:130] > ID_LIKE=debian
	I0717 21:27:12.600802 1199225 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0717 21:27:12.600812 1199225 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0717 21:27:12.600823 1199225 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0717 21:27:12.600836 1199225 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0717 21:27:12.600841 1199225 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0717 21:27:12.600893 1199225 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 21:27:12.600918 1199225 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 21:27:12.600934 1199225 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 21:27:12.600941 1199225 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 21:27:12.600954 1199225 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/addons for local assets ...
	I0717 21:27:12.601010 1199225 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/files for local assets ...
	I0717 21:27:12.601093 1199225 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem -> 11358722.pem in /etc/ssl/certs
	I0717 21:27:12.601104 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem -> /etc/ssl/certs/11358722.pem
	I0717 21:27:12.601250 1199225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 21:27:12.612030 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem --> /etc/ssl/certs/11358722.pem (1708 bytes)
	I0717 21:27:12.641499 1199225 start.go:303] post-start completed in 159.745434ms
	I0717 21:27:12.641900 1199225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-810165-m02
	I0717 21:27:12.659689 1199225 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/config.json ...
	I0717 21:27:12.660119 1199225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:27:12.660172 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165-m02
	I0717 21:27:12.683800 1199225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34106 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165-m02/id_rsa Username:docker}
	I0717 21:27:12.779409 1199225 command_runner.go:130] > 16%!
	(MISSING)
	I0717 21:27:12.779503 1199225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 21:27:12.785400 1199225 command_runner.go:130] > 164G
	I0717 21:27:12.785439 1199225 start.go:128] duration metric: createHost completed in 10.28999236s
	I0717 21:27:12.785448 1199225 start.go:83] releasing machines lock for "multinode-810165-m02", held for 10.290128056s
	I0717 21:27:12.785520 1199225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-810165-m02
	I0717 21:27:12.804514 1199225 out.go:177] * Found network options:
	I0717 21:27:12.806298 1199225 out.go:177]   - NO_PROXY=192.168.58.2
	W0717 21:27:12.808119 1199225 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 21:27:12.808174 1199225 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 21:27:12.808242 1199225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 21:27:12.808290 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165-m02
	I0717 21:27:12.808561 1199225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 21:27:12.808616 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165-m02
	I0717 21:27:12.827752 1199225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34106 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165-m02/id_rsa Username:docker}
	I0717 21:27:12.829140 1199225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34106 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165-m02/id_rsa Username:docker}
	I0717 21:27:13.088772 1199225 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 21:27:13.088854 1199225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 21:27:13.094157 1199225 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0717 21:27:13.094179 1199225 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0717 21:27:13.094188 1199225 command_runner.go:130] > Device: b3h/179d	Inode: 5189919     Links: 1
	I0717 21:27:13.094221 1199225 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 21:27:13.094237 1199225 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0717 21:27:13.094244 1199225 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0717 21:27:13.094253 1199225 command_runner.go:130] > Change: 2023-07-17 21:03:28.884783195 +0000
	I0717 21:27:13.094262 1199225 command_runner.go:130] >  Birth: 2023-07-17 21:03:28.880783199 +0000
	I0717 21:27:13.094565 1199225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:27:13.119886 1199225 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 21:27:13.119974 1199225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:27:13.160106 1199225 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0717 21:27:13.160137 1199225 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
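
The %!p(MISSING) in the find invocation above is the same logging artifact; the comma-separated list it printed indicates a find -printf of each matched path. As a standalone sketch, disabling the default bridge/podman CNI configs comes down to:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( -name '*bridge*' -o -name '*podman*' \) -not -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
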
	I0717 21:27:13.160145 1199225 start.go:469] detecting cgroup driver to use...
	I0717 21:27:13.160175 1199225 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 21:27:13.160233 1199225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 21:27:13.180807 1199225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 21:27:13.194035 1199225 docker.go:196] disabling cri-docker service (if available) ...
	I0717 21:27:13.194099 1199225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 21:27:13.210121 1199225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 21:27:13.227715 1199225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 21:27:13.326406 1199225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 21:27:13.342750 1199225 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0717 21:27:13.436903 1199225 docker.go:212] disabling docker service ...
	I0717 21:27:13.437013 1199225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 21:27:13.458406 1199225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 21:27:13.473275 1199225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 21:27:13.579842 1199225 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0717 21:27:13.579954 1199225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 21:27:13.691150 1199225 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0717 21:27:13.691662 1199225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 21:27:13.706705 1199225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 21:27:13.728977 1199225 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 21:27:13.731092 1199225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 21:27:13.731228 1199225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:27:13.747465 1199225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 21:27:13.747608 1199225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:27:13.761190 1199225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:27:13.775068 1199225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:27:13.787715 1199225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 21:27:13.801486 1199225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 21:27:13.811272 1199225 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 21:27:13.812592 1199225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 21:27:13.823108 1199225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 21:27:13.924460 1199225 ssh_runner.go:195] Run: sudo systemctl restart crio
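
Condensed, the CRI-O reconfiguration above is four in-place edits of the drop-in config followed by a restart; the same commands as a standalone sketch:

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
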
	I0717 21:27:14.057878 1199225 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 21:27:14.058006 1199225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 21:27:14.063132 1199225 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 21:27:14.063155 1199225 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 21:27:14.063162 1199225 command_runner.go:130] > Device: bch/188d	Inode: 186         Links: 1
	I0717 21:27:14.063190 1199225 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 21:27:14.063208 1199225 command_runner.go:130] > Access: 2023-07-17 21:27:14.043173879 +0000
	I0717 21:27:14.063216 1199225 command_runner.go:130] > Modify: 2023-07-17 21:27:14.043173879 +0000
	I0717 21:27:14.063221 1199225 command_runner.go:130] > Change: 2023-07-17 21:27:14.043173879 +0000
	I0717 21:27:14.063226 1199225 command_runner.go:130] >  Birth: -
	I0717 21:27:14.063320 1199225 start.go:537] Will wait 60s for crictl version
	I0717 21:27:14.063390 1199225 ssh_runner.go:195] Run: which crictl
	I0717 21:27:14.067624 1199225 command_runner.go:130] > /usr/bin/crictl
	I0717 21:27:14.068156 1199225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 21:27:14.113672 1199225 command_runner.go:130] > Version:  0.1.0
	I0717 21:27:14.113737 1199225 command_runner.go:130] > RuntimeName:  cri-o
	I0717 21:27:14.113756 1199225 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0717 21:27:14.113775 1199225 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 21:27:14.116455 1199225 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 21:27:14.116593 1199225 ssh_runner.go:195] Run: crio --version
	I0717 21:27:14.165175 1199225 command_runner.go:130] > crio version 1.24.6
	I0717 21:27:14.165239 1199225 command_runner.go:130] > Version:          1.24.6
	I0717 21:27:14.165272 1199225 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0717 21:27:14.165292 1199225 command_runner.go:130] > GitTreeState:     clean
	I0717 21:27:14.165314 1199225 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0717 21:27:14.165352 1199225 command_runner.go:130] > GoVersion:        go1.18.2
	I0717 21:27:14.165373 1199225 command_runner.go:130] > Compiler:         gc
	I0717 21:27:14.165394 1199225 command_runner.go:130] > Platform:         linux/arm64
	I0717 21:27:14.165430 1199225 command_runner.go:130] > Linkmode:         dynamic
	I0717 21:27:14.165458 1199225 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 21:27:14.165479 1199225 command_runner.go:130] > SeccompEnabled:   true
	I0717 21:27:14.165507 1199225 command_runner.go:130] > AppArmorEnabled:  false
	I0717 21:27:14.167157 1199225 ssh_runner.go:195] Run: crio --version
	I0717 21:27:14.210263 1199225 command_runner.go:130] > crio version 1.24.6
	I0717 21:27:14.210329 1199225 command_runner.go:130] > Version:          1.24.6
	I0717 21:27:14.210353 1199225 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0717 21:27:14.210376 1199225 command_runner.go:130] > GitTreeState:     clean
	I0717 21:27:14.210399 1199225 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0717 21:27:14.210428 1199225 command_runner.go:130] > GoVersion:        go1.18.2
	I0717 21:27:14.210447 1199225 command_runner.go:130] > Compiler:         gc
	I0717 21:27:14.210465 1199225 command_runner.go:130] > Platform:         linux/arm64
	I0717 21:27:14.210487 1199225 command_runner.go:130] > Linkmode:         dynamic
	I0717 21:27:14.210519 1199225 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 21:27:14.210542 1199225 command_runner.go:130] > SeccompEnabled:   true
	I0717 21:27:14.210562 1199225 command_runner.go:130] > AppArmorEnabled:  false
	I0717 21:27:14.214932 1199225 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0717 21:27:14.216829 1199225 out.go:177]   - env NO_PROXY=192.168.58.2
	I0717 21:27:14.218813 1199225 cli_runner.go:164] Run: docker network inspect multinode-810165 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 21:27:14.236461 1199225 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0717 21:27:14.241277 1199225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 21:27:14.255762 1199225 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165 for IP: 192.168.58.3
	I0717 21:27:14.255792 1199225 certs.go:190] acquiring lock for shared ca certs: {Name:mk8e5c72a7d7e3f9ffe23960b258dcb0da4448fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:27:14.255923 1199225 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.key
	I0717 21:27:14.255967 1199225 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.key
	I0717 21:27:14.255985 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 21:27:14.256001 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 21:27:14.256016 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 21:27:14.256031 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 21:27:14.256087 1199225 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/1135872.pem (1338 bytes)
	W0717 21:27:14.256121 1199225 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/1135872_empty.pem, impossibly tiny 0 bytes
	I0717 21:27:14.256135 1199225 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 21:27:14.256160 1199225 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem (1082 bytes)
	I0717 21:27:14.256186 1199225 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem (1123 bytes)
	I0717 21:27:14.256213 1199225 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem (1675 bytes)
	I0717 21:27:14.256261 1199225 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem (1708 bytes)
	I0717 21:27:14.256296 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:27:14.256311 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/1135872.pem -> /usr/share/ca-certificates/1135872.pem
	I0717 21:27:14.256327 1199225 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem -> /usr/share/ca-certificates/11358722.pem
	I0717 21:27:14.256660 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 21:27:14.286385 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 21:27:14.315572 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 21:27:14.345241 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 21:27:14.375365 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 21:27:14.406141 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/1135872.pem --> /usr/share/ca-certificates/1135872.pem (1338 bytes)
	I0717 21:27:14.436120 1199225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem --> /usr/share/ca-certificates/11358722.pem (1708 bytes)
	I0717 21:27:14.465979 1199225 ssh_runner.go:195] Run: openssl version
	I0717 21:27:14.472779 1199225 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0717 21:27:14.473181 1199225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 21:27:14.484606 1199225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:27:14.489348 1199225 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 21:03 /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:27:14.489391 1199225 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:03 /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:27:14.489440 1199225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:27:14.497673 1199225 command_runner.go:130] > b5213941
	I0717 21:27:14.498105 1199225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 21:27:14.510121 1199225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1135872.pem && ln -fs /usr/share/ca-certificates/1135872.pem /etc/ssl/certs/1135872.pem"
	I0717 21:27:14.522285 1199225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1135872.pem
	I0717 21:27:14.527294 1199225 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 21:10 /usr/share/ca-certificates/1135872.pem
	I0717 21:27:14.527345 1199225 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 21:10 /usr/share/ca-certificates/1135872.pem
	I0717 21:27:14.527404 1199225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1135872.pem
	I0717 21:27:14.535947 1199225 command_runner.go:130] > 51391683
	I0717 21:27:14.536392 1199225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1135872.pem /etc/ssl/certs/51391683.0"
	I0717 21:27:14.548595 1199225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11358722.pem && ln -fs /usr/share/ca-certificates/11358722.pem /etc/ssl/certs/11358722.pem"
	I0717 21:27:14.560828 1199225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11358722.pem
	I0717 21:27:14.565965 1199225 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 21:10 /usr/share/ca-certificates/11358722.pem
	I0717 21:27:14.566039 1199225 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 21:10 /usr/share/ca-certificates/11358722.pem
	I0717 21:27:14.566106 1199225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11358722.pem
	I0717 21:27:14.576110 1199225 command_runner.go:130] > 3ec20f2e
	I0717 21:27:14.576186 1199225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11358722.pem /etc/ssl/certs/3ec20f2e.0"
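
The hash-then-symlink pattern repeated above is how OpenSSL locates CA certificates by subject hash at verification time; done by hand for one cert, it is (a sketch, using the minikubeCA hash this run reported):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
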
	I0717 21:27:14.588721 1199225 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 21:27:14.593190 1199225 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 21:27:14.593305 1199225 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
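The probe above treats a failing `ls` on /var/lib/minikube/certs/etcd as a sign of a first start rather than as an error. A small Go sketch of the same exit-status check, assuming a local run rather than the SSH runner:

// Run `ls` on the etcd certs directory and interpret a non-zero exit
// (status 2 here) as "directory missing, likely first start".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("ls", "/var/lib/minikube/certs/etcd").Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("certs directory exists, reusing existing certificates")
	case errors.As(err, &exitErr):
		// `ls` exits with status 2 when the path does not exist.
		fmt.Printf("certs directory doesn't exist, likely first start (status %d)\n",
			exitErr.ExitCode())
	default:
		fmt.Println("could not run ls:", err)
	}
}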
	I0717 21:27:14.593453 1199225 ssh_runner.go:195] Run: crio config
	I0717 21:27:14.645000 1199225 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 21:27:14.645073 1199225 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 21:27:14.645098 1199225 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 21:27:14.645119 1199225 command_runner.go:130] > #
	I0717 21:27:14.645174 1199225 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 21:27:14.645202 1199225 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 21:27:14.645224 1199225 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 21:27:14.645249 1199225 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 21:27:14.645280 1199225 command_runner.go:130] > # reload'.
	I0717 21:27:14.645309 1199225 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 21:27:14.645333 1199225 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 21:27:14.645357 1199225 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 21:27:14.645391 1199225 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 21:27:14.645418 1199225 command_runner.go:130] > [crio]
	I0717 21:27:14.645441 1199225 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 21:27:14.645462 1199225 command_runner.go:130] > # container images, in this directory.
	I0717 21:27:14.645502 1199225 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0717 21:27:14.645528 1199225 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 21:27:14.645840 1199225 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0717 21:27:14.645882 1199225 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 21:27:14.645902 1199225 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 21:27:14.645923 1199225 command_runner.go:130] > # storage_driver = "vfs"
	I0717 21:27:14.645960 1199225 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 21:27:14.645989 1199225 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 21:27:14.646010 1199225 command_runner.go:130] > # storage_option = [
	I0717 21:27:14.646286 1199225 command_runner.go:130] > # ]
	I0717 21:27:14.646323 1199225 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 21:27:14.646347 1199225 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 21:27:14.646367 1199225 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 21:27:14.646401 1199225 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 21:27:14.646425 1199225 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 21:27:14.646445 1199225 command_runner.go:130] > # always happen on a node reboot
	I0717 21:27:14.646466 1199225 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 21:27:14.646500 1199225 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 21:27:14.646523 1199225 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 21:27:14.646547 1199225 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 21:27:14.646568 1199225 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 21:27:14.646610 1199225 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 21:27:14.646640 1199225 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 21:27:14.646660 1199225 command_runner.go:130] > # internal_wipe = true
	I0717 21:27:14.646681 1199225 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 21:27:14.646715 1199225 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 21:27:14.646737 1199225 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 21:27:14.646755 1199225 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 21:27:14.646776 1199225 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 21:27:14.646795 1199225 command_runner.go:130] > [crio.api]
	I0717 21:27:14.646823 1199225 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 21:27:14.646847 1199225 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 21:27:14.646869 1199225 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 21:27:14.646888 1199225 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 21:27:14.646920 1199225 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 21:27:14.646942 1199225 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 21:27:14.646959 1199225 command_runner.go:130] > # stream_port = "0"
	I0717 21:27:14.646980 1199225 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 21:27:14.646999 1199225 command_runner.go:130] > # stream_enable_tls = false
	I0717 21:27:14.647028 1199225 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 21:27:14.647054 1199225 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 21:27:14.647077 1199225 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 21:27:14.647098 1199225 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 21:27:14.647127 1199225 command_runner.go:130] > # minutes.
	I0717 21:27:14.647147 1199225 command_runner.go:130] > # stream_tls_cert = ""
	I0717 21:27:14.647167 1199225 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 21:27:14.647187 1199225 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 21:27:14.647225 1199225 command_runner.go:130] > # stream_tls_key = ""
	I0717 21:27:14.647248 1199225 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 21:27:14.647269 1199225 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 21:27:14.647291 1199225 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 21:27:14.647321 1199225 command_runner.go:130] > # stream_tls_ca = ""
	I0717 21:27:14.647345 1199225 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 21:27:14.647365 1199225 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0717 21:27:14.647388 1199225 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 21:27:14.647421 1199225 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0717 21:27:14.647483 1199225 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 21:27:14.647504 1199225 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 21:27:14.647531 1199225 command_runner.go:130] > [crio.runtime]
	I0717 21:27:14.647558 1199225 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 21:27:14.647578 1199225 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 21:27:14.647597 1199225 command_runner.go:130] > # "nofile=1024:2048"
	I0717 21:27:14.647696 1199225 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 21:27:14.647725 1199225 command_runner.go:130] > # default_ulimits = [
	I0717 21:27:14.647744 1199225 command_runner.go:130] > # ]
	I0717 21:27:14.647766 1199225 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 21:27:14.647796 1199225 command_runner.go:130] > # no_pivot = false
	I0717 21:27:14.647819 1199225 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 21:27:14.647841 1199225 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 21:27:14.647861 1199225 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 21:27:14.647895 1199225 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 21:27:14.647916 1199225 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 21:27:14.647936 1199225 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 21:27:14.647954 1199225 command_runner.go:130] > # conmon = ""
	I0717 21:27:14.647975 1199225 command_runner.go:130] > # Cgroup setting for conmon
	I0717 21:27:14.648005 1199225 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 21:27:14.648029 1199225 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 21:27:14.648052 1199225 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 21:27:14.648072 1199225 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 21:27:14.648105 1199225 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 21:27:14.648126 1199225 command_runner.go:130] > # conmon_env = [
	I0717 21:27:14.648145 1199225 command_runner.go:130] > # ]
	I0717 21:27:14.648169 1199225 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 21:27:14.648200 1199225 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 21:27:14.648223 1199225 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 21:27:14.648241 1199225 command_runner.go:130] > # default_env = [
	I0717 21:27:14.648259 1199225 command_runner.go:130] > # ]
	I0717 21:27:14.648279 1199225 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 21:27:14.648306 1199225 command_runner.go:130] > # selinux = false
	I0717 21:27:14.648338 1199225 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 21:27:14.648360 1199225 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 21:27:14.648380 1199225 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 21:27:14.648408 1199225 command_runner.go:130] > # seccomp_profile = ""
	I0717 21:27:14.648432 1199225 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 21:27:14.648452 1199225 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 21:27:14.648474 1199225 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 21:27:14.648504 1199225 command_runner.go:130] > # which might increase security.
	I0717 21:27:14.648525 1199225 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0717 21:27:14.648544 1199225 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 21:27:14.648566 1199225 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 21:27:14.648600 1199225 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 21:27:14.648622 1199225 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 21:27:14.648642 1199225 command_runner.go:130] > # This option supports live configuration reload.
	I0717 21:27:14.648662 1199225 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 21:27:14.648682 1199225 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 21:27:14.648714 1199225 command_runner.go:130] > # the cgroup blockio controller.
	I0717 21:27:14.648732 1199225 command_runner.go:130] > # blockio_config_file = ""
	I0717 21:27:14.648754 1199225 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 21:27:14.648772 1199225 command_runner.go:130] > # irqbalance daemon.
	I0717 21:27:14.648801 1199225 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 21:27:14.648828 1199225 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 21:27:14.648850 1199225 command_runner.go:130] > # This option supports live configuration reload.
	I0717 21:27:14.648870 1199225 command_runner.go:130] > # rdt_config_file = ""
	I0717 21:27:14.648902 1199225 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 21:27:14.648921 1199225 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 21:27:14.648941 1199225 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 21:27:14.648959 1199225 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 21:27:14.648982 1199225 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 21:27:14.649010 1199225 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 21:27:14.649034 1199225 command_runner.go:130] > # will be added.
	I0717 21:27:14.649053 1199225 command_runner.go:130] > # default_capabilities = [
	I0717 21:27:14.649072 1199225 command_runner.go:130] > # 	"CHOWN",
	I0717 21:27:14.649090 1199225 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 21:27:14.649120 1199225 command_runner.go:130] > # 	"FSETID",
	I0717 21:27:14.649146 1199225 command_runner.go:130] > # 	"FOWNER",
	I0717 21:27:14.649179 1199225 command_runner.go:130] > # 	"SETGID",
	I0717 21:27:14.649209 1199225 command_runner.go:130] > # 	"SETUID",
	I0717 21:27:14.649242 1199225 command_runner.go:130] > # 	"SETPCAP",
	I0717 21:27:14.649260 1199225 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 21:27:14.649280 1199225 command_runner.go:130] > # 	"KILL",
	I0717 21:27:14.649301 1199225 command_runner.go:130] > # ]
	I0717 21:27:14.649332 1199225 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0717 21:27:14.649359 1199225 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0717 21:27:14.649379 1199225 command_runner.go:130] > # add_inheritable_capabilities = true
	I0717 21:27:14.649402 1199225 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 21:27:14.649573 1199225 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 21:27:14.649590 1199225 command_runner.go:130] > # default_sysctls = [
	I0717 21:27:14.649595 1199225 command_runner.go:130] > # ]
	I0717 21:27:14.649601 1199225 command_runner.go:130] > # List of devices on the host that a
	I0717 21:27:14.649621 1199225 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 21:27:14.649635 1199225 command_runner.go:130] > # allowed_devices = [
	I0717 21:27:14.649641 1199225 command_runner.go:130] > # 	"/dev/fuse",
	I0717 21:27:14.649645 1199225 command_runner.go:130] > # ]
	I0717 21:27:14.649655 1199225 command_runner.go:130] > # List of additional devices, specified as
	I0717 21:27:14.649712 1199225 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 21:27:14.649723 1199225 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 21:27:14.649731 1199225 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 21:27:14.649736 1199225 command_runner.go:130] > # additional_devices = [
	I0717 21:27:14.649741 1199225 command_runner.go:130] > # ]
	I0717 21:27:14.649747 1199225 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 21:27:14.649752 1199225 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 21:27:14.649756 1199225 command_runner.go:130] > # 	"/etc/cdi",
	I0717 21:27:14.649770 1199225 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 21:27:14.649782 1199225 command_runner.go:130] > # ]
	I0717 21:27:14.649790 1199225 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 21:27:14.649798 1199225 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 21:27:14.649806 1199225 command_runner.go:130] > # Defaults to false.
	I0717 21:27:14.649813 1199225 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 21:27:14.649824 1199225 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 21:27:14.649832 1199225 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 21:27:14.649849 1199225 command_runner.go:130] > # hooks_dir = [
	I0717 21:27:14.649868 1199225 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 21:27:14.649879 1199225 command_runner.go:130] > # ]
	I0717 21:27:14.649886 1199225 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 21:27:14.649896 1199225 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 21:27:14.649903 1199225 command_runner.go:130] > # its default mounts from the following two files:
	I0717 21:27:14.649910 1199225 command_runner.go:130] > #
	I0717 21:27:14.649917 1199225 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 21:27:14.649925 1199225 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 21:27:14.649947 1199225 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 21:27:14.649965 1199225 command_runner.go:130] > #
	I0717 21:27:14.649974 1199225 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 21:27:14.649982 1199225 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 21:27:14.649989 1199225 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 21:27:14.649996 1199225 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 21:27:14.650000 1199225 command_runner.go:130] > #
	I0717 21:27:14.650005 1199225 command_runner.go:130] > # default_mounts_file = ""
	I0717 21:27:14.650012 1199225 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 21:27:14.650020 1199225 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 21:27:14.650034 1199225 command_runner.go:130] > # pids_limit = 0
	I0717 21:27:14.650047 1199225 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 21:27:14.650066 1199225 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 21:27:14.650080 1199225 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 21:27:14.650090 1199225 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 21:27:14.650098 1199225 command_runner.go:130] > # log_size_max = -1
	I0717 21:27:14.650107 1199225 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 21:27:14.650114 1199225 command_runner.go:130] > # log_to_journald = false
	I0717 21:27:14.650122 1199225 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 21:27:14.650141 1199225 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 21:27:14.650154 1199225 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 21:27:14.650161 1199225 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 21:27:14.650176 1199225 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 21:27:14.650189 1199225 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 21:27:14.650196 1199225 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 21:27:14.650203 1199225 command_runner.go:130] > # read_only = false
	I0717 21:27:14.650211 1199225 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 21:27:14.650219 1199225 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 21:27:14.650228 1199225 command_runner.go:130] > # live configuration reload.
	I0717 21:27:14.650234 1199225 command_runner.go:130] > # log_level = "info"
	I0717 21:27:14.650265 1199225 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 21:27:14.650280 1199225 command_runner.go:130] > # This option supports live configuration reload.
	I0717 21:27:14.650286 1199225 command_runner.go:130] > # log_filter = ""
	I0717 21:27:14.650297 1199225 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 21:27:14.650305 1199225 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 21:27:14.650313 1199225 command_runner.go:130] > # separated by comma.
	I0717 21:27:14.650319 1199225 command_runner.go:130] > # uid_mappings = ""
	I0717 21:27:14.650329 1199225 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 21:27:14.650346 1199225 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 21:27:14.650358 1199225 command_runner.go:130] > # separated by comma.
	I0717 21:27:14.650363 1199225 command_runner.go:130] > # gid_mappings = ""
	I0717 21:27:14.650372 1199225 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 21:27:14.650383 1199225 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 21:27:14.650390 1199225 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 21:27:14.650398 1199225 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 21:27:14.650406 1199225 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 21:27:14.650421 1199225 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 21:27:14.650428 1199225 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 21:27:14.650434 1199225 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 21:27:14.650442 1199225 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 21:27:14.650453 1199225 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 21:27:14.650461 1199225 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 21:27:14.650468 1199225 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 21:27:14.650476 1199225 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 21:27:14.650505 1199225 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 21:27:14.650515 1199225 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 21:27:14.650521 1199225 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 21:27:14.650526 1199225 command_runner.go:130] > # drop_infra_ctr = true
	I0717 21:27:14.650535 1199225 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 21:27:14.650545 1199225 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 21:27:14.650554 1199225 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 21:27:14.650559 1199225 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 21:27:14.650566 1199225 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 21:27:14.650572 1199225 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 21:27:14.650579 1199225 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 21:27:14.650587 1199225 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 21:27:14.650592 1199225 command_runner.go:130] > # pinns_path = ""
	I0717 21:27:14.650600 1199225 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 21:27:14.650607 1199225 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 21:27:14.650615 1199225 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 21:27:14.650621 1199225 command_runner.go:130] > # default_runtime = "runc"
	I0717 21:27:14.650627 1199225 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 21:27:14.650636 1199225 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 21:27:14.650646 1199225 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 21:27:14.650659 1199225 command_runner.go:130] > # creation as a file is not desired either.
	I0717 21:27:14.650670 1199225 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 21:27:14.650679 1199225 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 21:27:14.650685 1199225 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 21:27:14.650689 1199225 command_runner.go:130] > # ]
	I0717 21:27:14.650696 1199225 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 21:27:14.650705 1199225 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 21:27:14.650713 1199225 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 21:27:14.650722 1199225 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 21:27:14.650727 1199225 command_runner.go:130] > #
	I0717 21:27:14.650738 1199225 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 21:27:14.650745 1199225 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 21:27:14.650754 1199225 command_runner.go:130] > #  runtime_type = "oci"
	I0717 21:27:14.650761 1199225 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 21:27:14.650771 1199225 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 21:27:14.650777 1199225 command_runner.go:130] > #  allowed_annotations = []
	I0717 21:27:14.650782 1199225 command_runner.go:130] > # Where:
	I0717 21:27:14.650788 1199225 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 21:27:14.650797 1199225 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 21:27:14.650804 1199225 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 21:27:14.650814 1199225 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 21:27:14.650819 1199225 command_runner.go:130] > #   in $PATH.
	I0717 21:27:14.650833 1199225 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 21:27:14.650839 1199225 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 21:27:14.650847 1199225 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 21:27:14.650855 1199225 command_runner.go:130] > #   state.
	I0717 21:27:14.650865 1199225 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 21:27:14.650876 1199225 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0717 21:27:14.650884 1199225 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 21:27:14.650891 1199225 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 21:27:14.650898 1199225 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 21:27:14.650910 1199225 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 21:27:14.650916 1199225 command_runner.go:130] > #   The currently recognized values are:
	I0717 21:27:14.650924 1199225 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 21:27:14.650935 1199225 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 21:27:14.650943 1199225 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 21:27:14.650951 1199225 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 21:27:14.650962 1199225 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 21:27:14.650972 1199225 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 21:27:14.650980 1199225 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 21:27:14.650988 1199225 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 21:27:14.650997 1199225 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 21:27:14.651002 1199225 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 21:27:14.651008 1199225 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0717 21:27:14.651016 1199225 command_runner.go:130] > runtime_type = "oci"
	I0717 21:27:14.651022 1199225 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 21:27:14.651027 1199225 command_runner.go:130] > runtime_config_path = ""
	I0717 21:27:14.651032 1199225 command_runner.go:130] > monitor_path = ""
	I0717 21:27:14.651037 1199225 command_runner.go:130] > monitor_cgroup = ""
	I0717 21:27:14.651042 1199225 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 21:27:14.651081 1199225 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 21:27:14.651110 1199225 command_runner.go:130] > # running containers
	I0717 21:27:14.651119 1199225 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 21:27:14.651126 1199225 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 21:27:14.651134 1199225 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 21:27:14.651141 1199225 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0717 21:27:14.651147 1199225 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 21:27:14.651153 1199225 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 21:27:14.651159 1199225 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 21:27:14.651165 1199225 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 21:27:14.651170 1199225 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 21:27:14.651176 1199225 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0717 21:27:14.651187 1199225 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 21:27:14.651196 1199225 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 21:27:14.651212 1199225 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 21:27:14.651226 1199225 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 21:27:14.651237 1199225 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 21:27:14.651244 1199225 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 21:27:14.651256 1199225 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 21:27:14.651269 1199225 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 21:27:14.651276 1199225 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 21:27:14.651291 1199225 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 21:27:14.651296 1199225 command_runner.go:130] > # Example:
	I0717 21:27:14.651308 1199225 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 21:27:14.651314 1199225 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 21:27:14.651320 1199225 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 21:27:14.651327 1199225 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 21:27:14.651336 1199225 command_runner.go:130] > # cpuset = 0
	I0717 21:27:14.651341 1199225 command_runner.go:130] > # cpushares = "0-1"
	I0717 21:27:14.651345 1199225 command_runner.go:130] > # Where:
	I0717 21:27:14.651357 1199225 command_runner.go:130] > # The workload name is workload-type.
	I0717 21:27:14.651367 1199225 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 21:27:14.651376 1199225 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 21:27:14.651383 1199225 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 21:27:14.651392 1199225 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 21:27:14.651400 1199225 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 21:27:14.651404 1199225 command_runner.go:130] > # 
	I0717 21:27:14.651415 1199225 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 21:27:14.651419 1199225 command_runner.go:130] > #
	I0717 21:27:14.651430 1199225 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 21:27:14.651438 1199225 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 21:27:14.651449 1199225 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 21:27:14.651457 1199225 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 21:27:14.651467 1199225 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 21:27:14.651472 1199225 command_runner.go:130] > [crio.image]
	I0717 21:27:14.651480 1199225 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 21:27:14.651486 1199225 command_runner.go:130] > # default_transport = "docker://"
	I0717 21:27:14.651496 1199225 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 21:27:14.651505 1199225 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 21:27:14.651513 1199225 command_runner.go:130] > # global_auth_file = ""
	I0717 21:27:14.651519 1199225 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 21:27:14.651526 1199225 command_runner.go:130] > # This option supports live configuration reload.
	I0717 21:27:14.651532 1199225 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 21:27:14.651542 1199225 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 21:27:14.651550 1199225 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 21:27:14.651559 1199225 command_runner.go:130] > # This option supports live configuration reload.
	I0717 21:27:14.651564 1199225 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 21:27:14.651571 1199225 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 21:27:14.651579 1199225 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 21:27:14.651588 1199225 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 21:27:14.651598 1199225 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 21:27:14.651603 1199225 command_runner.go:130] > # pause_command = "/pause"
	I0717 21:27:14.651611 1199225 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 21:27:14.651621 1199225 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 21:27:14.651631 1199225 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 21:27:14.651639 1199225 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 21:27:14.651649 1199225 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 21:27:14.651654 1199225 command_runner.go:130] > # signature_policy = ""
	I0717 21:27:14.651684 1199225 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 21:27:14.651694 1199225 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 21:27:14.651699 1199225 command_runner.go:130] > # changing them here.
	I0717 21:27:14.651705 1199225 command_runner.go:130] > # insecure_registries = [
	I0717 21:27:14.651712 1199225 command_runner.go:130] > # ]
	I0717 21:27:14.651720 1199225 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 21:27:14.651728 1199225 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 21:27:14.651733 1199225 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 21:27:14.651740 1199225 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 21:27:14.651745 1199225 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 21:27:14.651754 1199225 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 21:27:14.651758 1199225 command_runner.go:130] > # CNI plugins.
	I0717 21:27:14.651763 1199225 command_runner.go:130] > [crio.network]
	I0717 21:27:14.651770 1199225 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 21:27:14.651777 1199225 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 21:27:14.651783 1199225 command_runner.go:130] > # cni_default_network = ""
	I0717 21:27:14.651796 1199225 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 21:27:14.651802 1199225 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 21:27:14.651814 1199225 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 21:27:14.651819 1199225 command_runner.go:130] > # plugin_dirs = [
	I0717 21:27:14.651824 1199225 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 21:27:14.651828 1199225 command_runner.go:130] > # ]
	I0717 21:27:14.651835 1199225 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 21:27:14.651840 1199225 command_runner.go:130] > [crio.metrics]
	I0717 21:27:14.651846 1199225 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 21:27:14.651851 1199225 command_runner.go:130] > # enable_metrics = false
	I0717 21:27:14.651857 1199225 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 21:27:14.651866 1199225 command_runner.go:130] > # By default, all metrics are enabled.
	I0717 21:27:14.651873 1199225 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0717 21:27:14.651886 1199225 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 21:27:14.651893 1199225 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 21:27:14.651898 1199225 command_runner.go:130] > # metrics_collectors = [
	I0717 21:27:14.651903 1199225 command_runner.go:130] > # 	"operations",
	I0717 21:27:14.651914 1199225 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 21:27:14.651920 1199225 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 21:27:14.651925 1199225 command_runner.go:130] > # 	"operations_errors",
	I0717 21:27:14.651930 1199225 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 21:27:14.651939 1199225 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 21:27:14.651945 1199225 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 21:27:14.651950 1199225 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 21:27:14.651960 1199225 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 21:27:14.651966 1199225 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 21:27:14.651978 1199225 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 21:27:14.651983 1199225 command_runner.go:130] > # 	"containers_oom_total",
	I0717 21:27:14.651993 1199225 command_runner.go:130] > # 	"containers_oom",
	I0717 21:27:14.651998 1199225 command_runner.go:130] > # 	"processes_defunct",
	I0717 21:27:14.652003 1199225 command_runner.go:130] > # 	"operations_total",
	I0717 21:27:14.652008 1199225 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 21:27:14.652014 1199225 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 21:27:14.652019 1199225 command_runner.go:130] > # 	"operations_errors_total",
	I0717 21:27:14.652025 1199225 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 21:27:14.652030 1199225 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 21:27:14.652038 1199225 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 21:27:14.652048 1199225 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 21:27:14.652054 1199225 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 21:27:14.652062 1199225 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 21:27:14.652066 1199225 command_runner.go:130] > # ]
	I0717 21:27:14.652076 1199225 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 21:27:14.652081 1199225 command_runner.go:130] > # metrics_port = 9090
	I0717 21:27:14.652087 1199225 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 21:27:14.652093 1199225 command_runner.go:130] > # metrics_socket = ""
	I0717 21:27:14.652099 1199225 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 21:27:14.652106 1199225 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 21:27:14.652114 1199225 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 21:27:14.652124 1199225 command_runner.go:130] > # certificate on any modification event.
	I0717 21:27:14.652129 1199225 command_runner.go:130] > # metrics_cert = ""
	I0717 21:27:14.652136 1199225 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 21:27:14.652142 1199225 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 21:27:14.652146 1199225 command_runner.go:130] > # metrics_key = ""
	I0717 21:27:14.652153 1199225 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 21:27:14.652158 1199225 command_runner.go:130] > [crio.tracing]
	I0717 21:27:14.652165 1199225 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 21:27:14.652170 1199225 command_runner.go:130] > # enable_tracing = false
	I0717 21:27:14.652177 1199225 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 21:27:14.652182 1199225 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 21:27:14.652188 1199225 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 21:27:14.652194 1199225 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 21:27:14.652202 1199225 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 21:27:14.652207 1199225 command_runner.go:130] > [crio.stats]
	I0717 21:27:14.652217 1199225 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 21:27:14.652226 1199225 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 21:27:14.652231 1199225 command_runner.go:130] > # stats_collection_period = 0
	I0717 21:27:14.654375 1199225 command_runner.go:130] ! time="2023-07-17 21:27:14.642414782Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0717 21:27:14.654402 1199225 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
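The dump above is the effective CRI-O TOML configuration; the uncommented lines (conmon_cgroup, cgroup_manager, the [crio.runtime.runtimes.runc] table, pause_image) are the values minikube actually sets. A hedged sketch of reading back just those keys, assuming the github.com/BurntSushi/toml decoder and a struct covering only the fields of interest:

// Decode the handful of CRI-O settings minikube overrides from the output of
// `crio config`. The struct shape is illustrative, not a complete schema.
package main

import (
	"fmt"
	"os/exec"

	"github.com/BurntSushi/toml"
)

type crioConfig struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
			ConmonCgroup  string `toml:"conmon_cgroup"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	out, err := exec.Command("crio", "config").Output() // stderr (version banner) is ignored
	if err != nil {
		panic(err)
	}
	var cfg crioConfig
	if err := toml.Unmarshal(out, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("cgroup_manager=%q conmon_cgroup=%q pause_image=%q\n",
		cfg.Crio.Runtime.CgroupManager, cfg.Crio.Runtime.ConmonCgroup, cfg.Crio.Image.PauseImage)
}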
	I0717 21:27:14.654496 1199225 cni.go:84] Creating CNI manager for ""
	I0717 21:27:14.654508 1199225 cni.go:137] 2 nodes found, recommending kindnet
	I0717 21:27:14.654517 1199225 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 21:27:14.654535 1199225 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-810165 NodeName:multinode-810165-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 21:27:14.654672 1199225 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-810165-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
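The four YAML documents above (InitConfiguration with the join-time nodeRegistration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options listed before them. A minimal text/template sketch of rendering one such fragment; the template here is a simplified stand-in, not minikube's actual bootstrapper template:

// Render a nodeRegistration fragment from a few of the kubeadm options.
package main

import (
	"os"
	"text/template"
)

const nodeRegistration = `nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(nodeRegistration))
	_ = t.Execute(os.Stdout, struct {
		CRISocket, NodeName, NodeIP string
	}{"/var/run/crio/crio.sock", "multinode-810165-m02", "192.168.58.3"})
}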
	
	I0717 21:27:14.654730 1199225 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-810165-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-810165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 21:27:14.654795 1199225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 21:27:14.664549 1199225 command_runner.go:130] > kubeadm
	I0717 21:27:14.664566 1199225 command_runner.go:130] > kubectl
	I0717 21:27:14.664572 1199225 command_runner.go:130] > kubelet
	I0717 21:27:14.665900 1199225 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 21:27:14.666010 1199225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0717 21:27:14.676806 1199225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 21:27:14.698536 1199225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
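The two "scp memory" lines above write generated contents straight from memory to the node: the kubelet systemd drop-in (10-kubeadm.conf) and the service unit. A local Go sketch of writing the drop-in; note the empty ExecStart=, which is the systemd idiom for clearing the base unit's command before overriding it:

// Write the kubelet drop-in shown in the log. Flags are trimmed here for
// brevity; the full command line appears above. Local illustration only.
package main

import "os"

func main() {
	dropIn := "[Service]\n" +
		"ExecStart=\n" + // clears ExecStart inherited from kubelet.service
		"ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet" +
		" --config=/var/lib/kubelet/config.yaml --node-ip=192.168.58.3\n"
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf",
		[]byte(dropIn), 0o644); err != nil {
		panic(err)
	}
}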
	I0717 21:27:14.720446 1199225 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0717 21:27:14.725690 1199225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
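The bash one-liner above updates /etc/hosts idempotently: it filters out any stale control-plane.minikube.internal entry, appends the current mapping, and copies the temp file back under sudo. The same filter-append-write pattern as a local Go sketch:

// Replace any existing control-plane.minikube.internal entry in /etc/hosts
// with the current mapping, mirroring the grep -v / echo / cp one-liner.
package main

import (
	"os"
	"strings"
)

func main() {
	const hostname = "control-plane.minikube.internal"
	const entry = "192.168.58.2\t" + hostname
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		// Mirror of `grep -v $'\t<hostname>$'`: drop any stale entry.
		if !strings.HasSuffix(line, "\t"+hostname) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}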
	I0717 21:27:14.739902 1199225 host.go:66] Checking if "multinode-810165" exists ...
	I0717 21:27:14.740197 1199225 config.go:182] Loaded profile config "multinode-810165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:27:14.740471 1199225 start.go:304] JoinCluster: &{Name:multinode-810165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-810165 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:27:14.740563 1199225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 21:27:14.740617 1199225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165
	I0717 21:27:14.762952 1199225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34101 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165/id_rsa Username:docker}
	I0717 21:27:14.931841 1199225 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token b5wuo4.f8mw1bpm3uho1cf7 --discovery-token-ca-cert-hash sha256:114c2c6cf073ae167542850daf65adc7c2fffca2d9da9ec1b9de2454bc4224aa 
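That join command comes from `kubeadm token create --print-join-command --ttl=0` on the control plane: it mints a non-expiring bootstrap token and prints the full join invocation, discovery CA cert hash included. A hedged sketch of capturing it from Go, assuming kubeadm is on PATH of the node where this runs:

	// printJoinCommand mints a bootstrap token on the control plane and
	// returns the ready-to-run join command, as in the log line above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func printJoinCommand() (string, error) {
		out, err := exec.Command("sudo", "kubeadm", "token", "create",
			"--print-join-command", "--ttl=0").CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("kubeadm token create: %w: %s", err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		cmd, err := printJoinCommand()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(cmd) // kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash sha256:...
	}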
	I0717 21:27:14.935899 1199225 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 21:27:14.935942 1199225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b5wuo4.f8mw1bpm3uho1cf7 --discovery-token-ca-cert-hash sha256:114c2c6cf073ae167542850daf65adc7c2fffca2d9da9ec1b9de2454bc4224aa --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-810165-m02"
	I0717 21:27:14.986187 1199225 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 21:27:15.046391 1199225 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0717 21:27:15.046472 1199225 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1039-aws
	I0717 21:27:15.046510 1199225 command_runner.go:130] > OS: Linux
	I0717 21:27:15.046549 1199225 command_runner.go:130] > CGROUPS_CPU: enabled
	I0717 21:27:15.046566 1199225 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0717 21:27:15.046573 1199225 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0717 21:27:15.046582 1199225 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0717 21:27:15.046598 1199225 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0717 21:27:15.046607 1199225 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0717 21:27:15.046627 1199225 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0717 21:27:15.046642 1199225 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0717 21:27:15.046648 1199225 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0717 21:27:15.178757 1199225 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0717 21:27:15.178783 1199225 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0717 21:27:15.212953 1199225 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 21:27:15.213283 1199225 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 21:27:15.213308 1199225 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 21:27:15.317391 1199225 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0717 21:27:17.834743 1199225 command_runner.go:130] > This node has joined the cluster:
	I0717 21:27:17.834764 1199225 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0717 21:27:17.834772 1199225 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0717 21:27:17.834780 1199225 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0717 21:27:17.838387 1199225 command_runner.go:130] ! W0717 21:27:14.985614    1020 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0717 21:27:17.838418 1199225 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-aws\n", err: exit status 1
	I0717 21:27:17.838432 1199225 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 21:27:17.838450 1199225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b5wuo4.f8mw1bpm3uho1cf7 --discovery-token-ca-cert-hash sha256:114c2c6cf073ae167542850daf65adc7c2fffca2d9da9ec1b9de2454bc4224aa --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-810165-m02": (2.902494902s)
	I0717 21:27:17.838469 1199225 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 21:27:17.953361 1199225 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0717 21:27:18.084768 1199225 start.go:306] JoinCluster complete in 3.3442908s
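The join ran with --ignore-preflight-errors=all, an explicit CRI-O socket, and a per-profile node name; the deprecation warning above fired because the socket was passed without a URL scheme, and the Service-Kubelet warning is cleared by the daemon-reload/enable/start sequence that follows it. A sketch of the two steps as run on the joining node (token and hash are placeholders for the values minted above):

	// joinWorker replays the two steps logged above: kubeadm join with a
	// freshly minted token, then enabling kubelet so the preflight
	// "[WARNING Service-Kubelet]" does not recur. Sketch only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s: %w: %s", name, err, out)
		}
		return nil
	}

	func main() {
		join := []string{"kubeadm", "join", "control-plane.minikube.internal:8443",
			"--token", "<token>", // placeholder: from --print-join-command
			"--discovery-token-ca-cert-hash", "sha256:<hash>", // placeholder
			"--ignore-preflight-errors=all",
			// the unix:// scheme avoids the deprecation warning seen above
			"--cri-socket", "unix:///var/run/crio/crio.sock",
			"--node-name=multinode-810165-m02"}
		if err := run("sudo", join...); err != nil {
			fmt.Println(err)
			return
		}
		if err := run("sudo", "bash", "-c",
			"systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet"); err != nil {
			fmt.Println(err)
		}
	}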
	I0717 21:27:18.084793 1199225 cni.go:84] Creating CNI manager for ""
	I0717 21:27:18.084799 1199225 cni.go:137] 2 nodes found, recommending kindnet
	I0717 21:27:18.084875 1199225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 21:27:18.090208 1199225 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 21:27:18.090230 1199225 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0717 21:27:18.090238 1199225 command_runner.go:130] > Device: 3ah/58d	Inode: 5193619     Links: 1
	I0717 21:27:18.090246 1199225 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 21:27:18.090254 1199225 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0717 21:27:18.090260 1199225 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0717 21:27:18.090266 1199225 command_runner.go:130] > Change: 2023-07-17 21:03:29.560782622 +0000
	I0717 21:27:18.090272 1199225 command_runner.go:130] >  Birth: 2023-07-17 21:03:29.520782656 +0000
	I0717 21:27:18.090657 1199225 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 21:27:18.090671 1199225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 21:27:18.123962 1199225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 21:27:18.483251 1199225 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0717 21:27:18.492392 1199225 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0717 21:27:18.498246 1199225 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0717 21:27:18.559929 1199225 command_runner.go:130] > daemonset.apps/kindnet configured
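With a second node present, the CNI manager switches its recommendation to kindnet and re-applies the manifest with the cluster's own kubectl; `kubectl apply` is idempotent, which is why the RBAC objects come back "unchanged" while the DaemonSet is "configured" to cover the new node. The same apply step as a minimal sketch (binary and file paths copied from the log):

	// applyCNI re-runs the kubectl apply from the log above. Idempotent:
	// unchanged objects stay untouched, the kindnet DaemonSet is updated.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.27.3/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}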
	I0717 21:27:18.572448 1199225 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:27:18.572725 1199225 kapi.go:59] client config for multinode-810165: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 21:27:18.573043 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 21:27:18.573049 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:18.573059 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:18.573066 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:18.576035 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:18.576055 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:18.576063 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:18.576071 1199225 round_trippers.go:580]     Content-Length: 291
	I0717 21:27:18.576077 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:18 GMT
	I0717 21:27:18.576084 1199225 round_trippers.go:580]     Audit-Id: 85bcb8fe-1b6e-432c-9171-1995504e8f02
	I0717 21:27:18.576091 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:18.576097 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:18.576104 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:18.576335 1199225 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"079abaa7-e8db-4785-a68a-7cea17b9f8f9","resourceVersion":"446","creationTimestamp":"2023-07-17T21:26:15Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0717 21:27:18.576470 1199225 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-810165" context rescaled to 1 replicas
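The rescale goes through the Deployment's autoscaling/v1 Scale subresource (the GET above) rather than patching the Deployment spec directly. Roughly the same read-then-set with client-go's typed clientset, as a sketch (kubeconfig path taken from the log; GetScale/UpdateScale are the standard typed-client calls):

	// rescaleCoreDNS pins the coredns Deployment to one replica via its
	// Scale subresource, the operation logged above. Sketch only.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/home/jenkins/minikube-integration/16890-1130480/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if scale.Spec.Replicas != 1 {
			scale.Spec.Replicas = 1
			if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
		fmt.Println("coredns rescaled to 1 replica")
	}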
	I0717 21:27:18.576514 1199225 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 21:27:18.578304 1199225 out.go:177] * Verifying Kubernetes components...
	I0717 21:27:18.580451 1199225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:27:18.665331 1199225 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:27:18.665669 1199225 kapi.go:59] client config for multinode-810165: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/multinode-810165/client.key", CAFile:"/home/jenkins/minikube-integration/16890-1130480/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 21:27:18.666109 1199225 node_ready.go:35] waiting up to 6m0s for node "multinode-810165-m02" to be "Ready" ...
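The GETs that follow are that wait loop: fetch the Node object roughly every half second and inspect its Ready condition until it turns True or the 6m budget expires (each "has status \"Ready\":\"False\"" line below is one unsuccessful check). A client-go sketch of the same poll, with the kubeconfig path taken from the log and the interval an assumption:

	// waitNodeReady reproduces the node_ready poll driving the requests
	// below: get the Node, check the NodeReady condition, retry.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/home/jenkins/minikube-integration/16890-1130480/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			n, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-810165-m02", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for node to become Ready")
	}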
	I0717 21:27:18.666198 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:18.666234 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:18.666261 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:18.666288 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:18.669548 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:18.669604 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:18.669625 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:18.669646 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:18.669680 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:18.669701 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:18.669720 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:18 GMT
	I0717 21:27:18.669740 1199225 round_trippers.go:580]     Audit-Id: d0c00cc4-881a-45c2-9c2b-24abc39ce0aa
	I0717 21:27:18.669911 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"484","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0717 21:27:19.171035 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:19.171103 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:19.171141 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:19.171167 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:19.174523 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:19.174592 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:19.174615 1199225 round_trippers.go:580]     Audit-Id: 1c5a7b29-0ce6-4603-96cb-989edebba1b3
	I0717 21:27:19.174644 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:19.174677 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:19.174701 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:19.174722 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:19.174757 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:19 GMT
	I0717 21:27:19.175872 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"484","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0717 21:27:19.670527 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:19.670552 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:19.670563 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:19.670570 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:19.673280 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:19.673369 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:19.673392 1199225 round_trippers.go:580]     Audit-Id: cf6ed7f3-0484-4ad3-9302-3684528fbe01
	I0717 21:27:19.673426 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:19.673439 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:19.673447 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:19.673454 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:19.673461 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:19 GMT
	I0717 21:27:19.673600 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"484","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0717 21:27:20.171111 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:20.171137 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:20.171148 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:20.171157 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:20.174155 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:20.174183 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:20.174193 1199225 round_trippers.go:580]     Audit-Id: 0bd0601f-8983-4e80-a9a7-3031c8437e8b
	I0717 21:27:20.174201 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:20.174208 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:20.174215 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:20.174223 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:20.174231 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:20 GMT
	I0717 21:27:20.174508 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"484","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0717 21:27:20.671197 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:20.671234 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:20.671245 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:20.671252 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:20.673842 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:20.673869 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:20.673878 1199225 round_trippers.go:580]     Audit-Id: 19a1d32e-5546-492b-b0be-c539ddd8962b
	I0717 21:27:20.673885 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:20.673892 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:20.673899 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:20.673907 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:20.673914 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:20 GMT
	I0717 21:27:20.674072 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"484","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0717 21:27:20.674440 1199225 node_ready.go:58] node "multinode-810165-m02" has status "Ready":"False"
	I0717 21:27:21.171170 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:21.171251 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:21.171276 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:21.171341 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:21.173985 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:21.174008 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:21.174017 1199225 round_trippers.go:580]     Audit-Id: 36a03071-ecaf-43b0-8637-64627361442e
	I0717 21:27:21.174024 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:21.174031 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:21.174037 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:21.174044 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:21.174051 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:21 GMT
	I0717 21:27:21.174183 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"484","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0717 21:27:21.671400 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:21.671425 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:21.671441 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:21.671449 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:21.674062 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:21.674136 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:21.674145 1199225 round_trippers.go:580]     Audit-Id: 93b4b0fe-7fef-4727-bda6-bbac16b710e5
	I0717 21:27:21.674152 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:21.674159 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:21.674166 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:21.674177 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:21.674189 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:21 GMT
	I0717 21:27:21.674287 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"484","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0717 21:27:22.170746 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:22.170766 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:22.170776 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:22.170783 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:22.174560 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:22.174583 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:22.174591 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:22.174599 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:22.174606 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:22.174613 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:22.174619 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:22 GMT
	I0717 21:27:22.174626 1199225 round_trippers.go:580]     Audit-Id: 2af05083-6e73-45cb-a641-b2247ad485cc
	I0717 21:27:22.174704 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"484","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0717 21:27:22.670534 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:22.670559 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:22.670569 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:22.670577 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:22.673184 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:22.673209 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:22.673218 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:22.673226 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:22 GMT
	I0717 21:27:22.673233 1199225 round_trippers.go:580]     Audit-Id: 935ac2f9-998e-4d63-8d2e-fcdd1bc9c873
	I0717 21:27:22.673240 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:22.673247 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:22.673256 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:22.673386 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"502","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0717 21:27:23.171033 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:23.171055 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:23.171065 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:23.171073 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:23.174079 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:23.174101 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:23.174111 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:23 GMT
	I0717 21:27:23.174118 1199225 round_trippers.go:580]     Audit-Id: 92da42dc-8b37-4f60-a54f-9c3192415c02
	I0717 21:27:23.174124 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:23.174131 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:23.174138 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:23.174150 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:23.174261 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"502","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0717 21:27:23.174639 1199225 node_ready.go:58] node "multinode-810165-m02" has status "Ready":"False"
	I0717 21:27:23.670614 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:23.670640 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:23.670652 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:23.670662 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:23.673801 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:23.673830 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:23.673846 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:23.673856 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:23 GMT
	I0717 21:27:23.673871 1199225 round_trippers.go:580]     Audit-Id: 7f4ff7cc-15e9-4099-bf26-4142353e62bb
	I0717 21:27:23.673885 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:23.673902 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:23.673916 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:23.674099 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"502","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0717 21:27:24.170786 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:24.170809 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:24.170819 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:24.170827 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:24.173885 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:24.173911 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:24.173926 1199225 round_trippers.go:580]     Audit-Id: 11a72b68-3056-4dad-aac5-a233844b7dd4
	I0717 21:27:24.173934 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:24.173941 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:24.173948 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:24.173955 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:24.173962 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:24 GMT
	I0717 21:27:24.174067 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"502","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0717 21:27:24.670551 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:24.670576 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:24.670587 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:24.670595 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:24.673188 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:24.673212 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:24.673589 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:24.673614 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:24 GMT
	I0717 21:27:24.673623 1199225 round_trippers.go:580]     Audit-Id: ed563052-a4c2-41e7-9def-b8b530d20100
	I0717 21:27:24.673631 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:24.673641 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:24.673648 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:24.679848 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"502","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0717 21:27:25.170534 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:25.170560 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:25.170573 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:25.170581 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:25.173467 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:25.173502 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:25.173512 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:25.173519 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:25.173526 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:25 GMT
	I0717 21:27:25.173533 1199225 round_trippers.go:580]     Audit-Id: 2eee7133-693a-4f23-b9fd-c06c66f43c9d
	I0717 21:27:25.173541 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:25.173549 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:25.173680 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"502","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0717 21:27:25.670523 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:25.670555 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:25.670565 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:25.670574 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:25.673285 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:25.673347 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:25.673369 1199225 round_trippers.go:580]     Audit-Id: 9f273799-b930-404e-b9ed-1059f1cb3739
	I0717 21:27:25.673392 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:25.673427 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:25.673450 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:25.673473 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:25.673495 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:25 GMT
	I0717 21:27:25.673717 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"502","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0717 21:27:25.674117 1199225 node_ready.go:58] node "multinode-810165-m02" has status "Ready":"False"
	I0717 21:27:26.171364 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:26.171385 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:26.171395 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:26.171402 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:26.174053 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:26.174115 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:26.174133 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:26.174141 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:26.174148 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:26.174154 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:26 GMT
	I0717 21:27:26.174161 1199225 round_trippers.go:580]     Audit-Id: 66144d6a-cd57-41eb-8719-9bf6740e3d94
	I0717 21:27:26.174183 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:26.174520 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"502","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0717 21:27:26.671212 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:26.671242 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:26.671253 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:26.671260 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:26.673727 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:26.673750 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:26.673759 1199225 round_trippers.go:580]     Audit-Id: 36ccee16-d489-460b-b8a4-b21f923471dd
	I0717 21:27:26.673766 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:26.673773 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:26.673780 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:26.673786 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:26.673793 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:26 GMT
	I0717 21:27:26.673916 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"502","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0717 21:27:27.171303 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:27.171328 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:27.171339 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:27.171367 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:27.174014 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:27.174042 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:27.174051 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:27.174064 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:27.174071 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:27 GMT
	I0717 21:27:27.174077 1199225 round_trippers.go:580]     Audit-Id: b70f9d75-10d3-4f6e-8081-5860b1f3c3c5
	I0717 21:27:27.174084 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:27.174095 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:27.175960 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"502","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0717 21:27:27.670946 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:27.670974 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:27.670985 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:27.670993 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:27.673498 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:27.673588 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:27.673614 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:27.673660 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:27.673673 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:27 GMT
	I0717 21:27:27.673680 1199225 round_trippers.go:580]     Audit-Id: 23280677-86ab-4a92-bd26-a9881b5422ea
	I0717 21:27:27.673687 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:27.673694 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:27.673837 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"502","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0717 21:27:27.674232 1199225 node_ready.go:58] node "multinode-810165-m02" has status "Ready":"False"
	I0717 21:27:28.171111 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:28.171140 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:28.171150 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:28.171165 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:28.174338 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:28.174362 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:28.174371 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:28.174379 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:28.174386 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:28 GMT
	I0717 21:27:28.174393 1199225 round_trippers.go:580]     Audit-Id: 12a7966c-755c-4585-93f2-c921669fc3be
	I0717 21:27:28.174402 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:28.174409 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:28.174502 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:28.671151 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:28.671178 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:28.671188 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:28.671196 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:28.673802 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:28.673828 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:28.673837 1199225 round_trippers.go:580]     Audit-Id: cb56a424-10c6-4ee0-9141-05e34afb32e5
	I0717 21:27:28.673866 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:28.673880 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:28.673887 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:28.673894 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:28.673901 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:28 GMT
	I0717 21:27:28.674181 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:29.171317 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:29.171344 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:29.171356 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:29.171363 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:29.174555 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:29.174579 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:29.174588 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:29.174596 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:29.174603 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:29 GMT
	I0717 21:27:29.174610 1199225 round_trippers.go:580]     Audit-Id: ea577af1-12b3-4fcc-a62a-3d8e1c23d05c
	I0717 21:27:29.174617 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:29.174632 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:29.175049 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:29.671234 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:29.671259 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:29.671270 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:29.671278 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:29.673885 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:29.673946 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:29.673967 1199225 round_trippers.go:580]     Audit-Id: 8b53b1e9-9774-4ad6-8326-f85f6e36cf7d
	I0717 21:27:29.673990 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:29.674025 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:29.674051 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:29.674074 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:29.674090 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:29 GMT
	I0717 21:27:29.674220 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:29.674598 1199225 node_ready.go:58] node "multinode-810165-m02" has status "Ready":"False"
	I0717 21:27:30.170508 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:30.170531 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:30.170541 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:30.170549 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:30.173366 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:30.173394 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:30.173403 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:30.173410 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:30.173421 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:30.173428 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:30 GMT
	I0717 21:27:30.173435 1199225 round_trippers.go:580]     Audit-Id: 1b93da43-5366-4704-a91d-906314136bbc
	I0717 21:27:30.173448 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:30.173869 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:30.670447 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:30.670473 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:30.670484 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:30.670491 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:30.673059 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:30.673151 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:30.673182 1199225 round_trippers.go:580]     Audit-Id: a77de5fd-c27c-45c1-a551-ffd0b82a7562
	I0717 21:27:30.673190 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:30.673196 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:30.673203 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:30.673213 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:30.673223 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:30 GMT
	I0717 21:27:30.673368 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:31.170515 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:31.170537 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:31.170555 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:31.170563 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:31.174255 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:31.174283 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:31.174292 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:31.174299 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:31.174306 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:31.174314 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:31 GMT
	I0717 21:27:31.174320 1199225 round_trippers.go:580]     Audit-Id: 149d27dc-4447-484f-8d65-5073df32df9d
	I0717 21:27:31.174327 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:31.174434 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:31.670510 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:31.670536 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:31.670547 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:31.670554 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:31.673331 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:31.673364 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:31.673374 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:31.673382 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:31.673388 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:31.673395 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:31.673403 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:31 GMT
	I0717 21:27:31.673409 1199225 round_trippers.go:580]     Audit-Id: ed5bcd54-5d62-4131-a88e-86413341c50f
	I0717 21:27:31.673554 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:32.170535 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:32.170559 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:32.170569 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:32.170576 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:32.173903 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:32.173931 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:32.173940 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:32.173947 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:32.173953 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:32.173961 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:32 GMT
	I0717 21:27:32.173968 1199225 round_trippers.go:580]     Audit-Id: 04f7db95-62be-4f46-be51-aa413b992ea2
	I0717 21:27:32.173978 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:32.174065 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:32.174436 1199225 node_ready.go:58] node "multinode-810165-m02" has status "Ready":"False"
	I0717 21:27:32.671147 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:32.671169 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:32.671179 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:32.671186 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:32.673726 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:32.673755 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:32.673765 1199225 round_trippers.go:580]     Audit-Id: 1b428f78-de19-468d-b682-bae55fee67ba
	I0717 21:27:32.673772 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:32.673779 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:32.673788 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:32.673797 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:32.673809 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:32 GMT
	I0717 21:27:32.673937 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:33.171226 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:33.171248 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:33.171259 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:33.171266 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:33.173821 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:33.173841 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:33.173850 1199225 round_trippers.go:580]     Audit-Id: 7fa31ff2-575a-420d-8be1-74833499dbd7
	I0717 21:27:33.173857 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:33.173863 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:33.173870 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:33.173876 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:33.173883 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:33 GMT
	I0717 21:27:33.174003 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:33.670819 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:33.670844 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:33.670854 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:33.670861 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:33.677858 1199225 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 21:27:33.677886 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:33.677895 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:33.677902 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:33.677909 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:33.677916 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:33.677927 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:33 GMT
	I0717 21:27:33.677933 1199225 round_trippers.go:580]     Audit-Id: 1cafe969-a857-457c-99b0-89859e02e44d
	I0717 21:27:33.678165 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:34.171293 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:34.171318 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:34.171329 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:34.171337 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:34.173810 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:34.173833 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:34.173841 1199225 round_trippers.go:580]     Audit-Id: 741f8132-c0a7-422f-a2b1-7089e2436f63
	I0717 21:27:34.173848 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:34.173855 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:34.173861 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:34.173868 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:34.173876 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:34 GMT
	I0717 21:27:34.173996 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:34.671189 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:34.671216 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:34.671232 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:34.671239 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:34.673638 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:34.673665 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:34.673674 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:34.673682 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:34 GMT
	I0717 21:27:34.673690 1199225 round_trippers.go:580]     Audit-Id: 82440179-9ca4-400d-ac6e-4c5ef31bdb4e
	I0717 21:27:34.673697 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:34.673707 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:34.673721 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:34.674002 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:34.674378 1199225 node_ready.go:58] node "multinode-810165-m02" has status "Ready":"False"
	I0717 21:27:35.170653 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:35.170679 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:35.170704 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:35.170716 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:35.173711 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:35.173742 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:35.173771 1199225 round_trippers.go:580]     Audit-Id: b02600ee-c04f-4379-9edb-c83787d1a388
	I0717 21:27:35.173780 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:35.173788 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:35.173795 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:35.173805 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:35.173813 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:35 GMT
	I0717 21:27:35.173997 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:35.670488 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:35.670511 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:35.670522 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:35.670530 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:35.673354 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:35.673381 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:35.673391 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:35.673398 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:35.673405 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:35.673412 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:35.673419 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:35 GMT
	I0717 21:27:35.673425 1199225 round_trippers.go:580]     Audit-Id: f7e91f4a-5ea7-4b30-bad6-4b665c63ac3c
	I0717 21:27:35.673532 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:36.170495 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:36.170519 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:36.170529 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:36.170537 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:36.173734 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:36.173765 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:36.173774 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:36.173782 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:36.173789 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:36 GMT
	I0717 21:27:36.173796 1199225 round_trippers.go:580]     Audit-Id: efde53fe-5a27-41ec-8ea3-6cdfa8661bcd
	I0717 21:27:36.173807 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:36.173820 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:36.175689 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:36.670770 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:36.670795 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:36.670806 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:36.670814 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:36.673249 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:36.673270 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:36.673278 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:36 GMT
	I0717 21:27:36.673286 1199225 round_trippers.go:580]     Audit-Id: 2902509f-4cf6-42b6-9752-c98d9fdd45dd
	I0717 21:27:36.673292 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:36.673299 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:36.673305 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:36.673312 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:36.673447 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:37.171047 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:37.171080 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:37.171091 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:37.171098 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:37.174766 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:37.174794 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:37.174804 1199225 round_trippers.go:580]     Audit-Id: cab5e6ce-ef22-4767-be22-d2f21907dcd3
	I0717 21:27:37.174814 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:37.174820 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:37.174828 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:37.174838 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:37.174846 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:37 GMT
	I0717 21:27:37.175300 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:37.175751 1199225 node_ready.go:58] node "multinode-810165-m02" has status "Ready":"False"
	I0717 21:27:37.671464 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:37.671488 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:37.671499 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:37.671507 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:37.674213 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:37.674235 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:37.674244 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:37.674251 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:37.674258 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:37.674265 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:37.674272 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:37 GMT
	I0717 21:27:37.674278 1199225 round_trippers.go:580]     Audit-Id: 61495a08-c837-44d7-a608-cc94b58202a1
	I0717 21:27:37.674459 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:38.171393 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:38.171454 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:38.171478 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:38.171503 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:38.175246 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:38.175269 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:38.175278 1199225 round_trippers.go:580]     Audit-Id: c92c9e9a-0175-428e-9f56-6c7af7973603
	I0717 21:27:38.175285 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:38.175292 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:38.175298 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:38.175305 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:38.175311 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:38 GMT
	I0717 21:27:38.175400 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:38.670424 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:38.670448 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:38.670462 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:38.670469 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:38.672993 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:38.673021 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:38.673030 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:38.673037 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:38.673045 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:38 GMT
	I0717 21:27:38.673052 1199225 round_trippers.go:580]     Audit-Id: 2d5e6b2e-bca7-4d70-bc8b-2b54b602c29e
	I0717 21:27:38.673059 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:38.673070 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:38.673199 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:39.170518 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:39.170549 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:39.170561 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:39.170568 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:39.173097 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:39.173119 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:39.173128 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:39.173135 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:39 GMT
	I0717 21:27:39.173141 1199225 round_trippers.go:580]     Audit-Id: ae6f3b23-5a71-4f85-b6c9-bb89fdf87477
	I0717 21:27:39.173148 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:39.173175 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:39.173183 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:39.173280 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:39.670993 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:39.671016 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:39.671026 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:39.671033 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:39.673609 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:39.673635 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:39.673644 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:39.673651 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:39 GMT
	I0717 21:27:39.673658 1199225 round_trippers.go:580]     Audit-Id: 43aa10bb-8bcd-43fb-a73f-35b4d382a8b5
	I0717 21:27:39.673665 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:39.673671 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:39.673678 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:39.673788 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:39.674165 1199225 node_ready.go:58] node "multinode-810165-m02" has status "Ready":"False"
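	The repeated GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02 entries above are a node-readiness wait loop: the client re-fetches the Node object roughly every 500ms (per the timestamps) and reports the Ready condition until it flips to True. Below is a minimal client-go sketch of such a loop for illustration only; it is not minikube's actual node_ready implementation, and the kubeconfig path and 500ms interval are assumptions inferred from this log.

	// Hedged sketch: poll a node's Ready condition via client-go, mirroring
	// the request/response cycle visible in the log above. Illustrative only.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location (~/.kube/config); minikube manages
		// a context per profile in this file.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		const nodeName = "multinode-810165-m02" // node polled in the log above

		for {
			// Each iteration issues one GET /api/v1/nodes/<name>, as logged
			// by round_trippers.go above.
			node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
			if err != nil {
				fmt.Println("get node:", err)
			} else {
				// Check the Ready condition; the log prints the same
				// summary as: node "<name>" has status "Ready":"False".
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						fmt.Printf("node %q has status \"Ready\":%q\n", nodeName, cond.Status)
						if cond.Status == corev1.ConditionTrue {
							return
						}
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // interval inferred from the log timestamps
		}
	}

	In the run recorded here, each iteration keeps logging "Ready":"False" because multinode-810165-m02 had not yet reported a Ready condition of True (the Node object is still at resourceVersion 509), so the loop continues below.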
	I0717 21:27:40.170862 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:40.170889 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:40.170902 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:40.170914 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:40.173652 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:40.173674 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:40.173683 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:40.173690 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:40.173697 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:40.173704 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:40.173711 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:40 GMT
	I0717 21:27:40.173718 1199225 round_trippers.go:580]     Audit-Id: 6e9fba13-7d78-4596-b2a1-d7c480fad997
	I0717 21:27:40.173858 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:40.671484 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:40.671511 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:40.671522 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:40.671529 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:40.674015 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:40.674042 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:40.674051 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:40.674058 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:40.674065 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:40 GMT
	I0717 21:27:40.674073 1199225 round_trippers.go:580]     Audit-Id: 23dd9324-4c80-4429-a661-cd41b2189c81
	I0717 21:27:40.674080 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:40.674087 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:40.674190 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:41.171276 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:41.171302 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:41.171314 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:41.171340 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:41.173847 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:41.173876 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:41.173886 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:41.173894 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:41.173900 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:41.173911 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:41.173920 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:41 GMT
	I0717 21:27:41.173927 1199225 round_trippers.go:580]     Audit-Id: 0e751ab3-38d3-4ff6-a5bc-5c3cc7ac6287
	I0717 21:27:41.174158 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:41.670794 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:41.670821 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:41.670831 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:41.670839 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:41.673362 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:41.673383 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:41.673391 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:41.673398 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:41.673405 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:41.673411 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:41.673418 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:41 GMT
	I0717 21:27:41.673425 1199225 round_trippers.go:580]     Audit-Id: 64aed3ce-217d-4503-bc17-0aadcd4f219c
	I0717 21:27:41.673575 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:42.171419 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:42.171446 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:42.171457 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:42.171465 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:42.174502 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:42.174535 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:42.174555 1199225 round_trippers.go:580]     Audit-Id: e7f8b8b1-b83f-40d6-a21f-81e7a3ab224f
	I0717 21:27:42.174567 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:42.174579 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:42.174587 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:42.174607 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:42.174614 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:42 GMT
	I0717 21:27:42.174845 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:42.175316 1199225 node_ready.go:58] node "multinode-810165-m02" has status "Ready":"False"
	I0717 21:27:42.671031 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:42.671054 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:42.671065 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:42.671072 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:42.673637 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:42.673667 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:42.673676 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:42.673684 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:42.673690 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:42.673698 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:42 GMT
	I0717 21:27:42.673707 1199225 round_trippers.go:580]     Audit-Id: 9868c55c-bbe6-4643-974a-6cf48f34f992
	I0717 21:27:42.673721 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:42.673976 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:43.170868 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:43.170894 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:43.170904 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:43.170912 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:43.174465 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:43.174485 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:43.174494 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:43.174501 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:43.174508 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:43 GMT
	I0717 21:27:43.174515 1199225 round_trippers.go:580]     Audit-Id: ecb08ee3-03b1-4372-bd5e-952039f31e45
	I0717 21:27:43.174522 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:43.174529 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:43.174641 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:43.670491 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:43.670515 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:43.670526 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:43.670534 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:43.672979 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:43.673005 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:43.673014 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:43.673020 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:43.673031 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:43.673039 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:43 GMT
	I0717 21:27:43.673046 1199225 round_trippers.go:580]     Audit-Id: 3a42848f-392b-4548-a77a-129cb758f47f
	I0717 21:27:43.673052 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:43.673236 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:44.170716 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:44.170739 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:44.170750 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:44.170758 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:44.173382 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:44.173411 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:44.173421 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:44.173428 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:44.173435 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:44.173442 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:44 GMT
	I0717 21:27:44.173448 1199225 round_trippers.go:580]     Audit-Id: e31ddf2e-1af0-40b6-9d12-75497edb6327
	I0717 21:27:44.173455 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:44.173587 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:44.671228 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:44.671253 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:44.671264 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:44.671272 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:44.673811 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:44.673840 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:44.673850 1199225 round_trippers.go:580]     Audit-Id: 1b363324-9131-443b-8079-43aa4afabcac
	I0717 21:27:44.673857 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:44.673864 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:44.673871 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:44.673884 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:44.673896 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:44 GMT
	I0717 21:27:44.674143 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:44.674538 1199225 node_ready.go:58] node "multinode-810165-m02" has status "Ready":"False"
	I0717 21:27:45.170591 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:45.170619 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:45.170630 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:45.170638 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:45.174080 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:45.174106 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:45.174114 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:45 GMT
	I0717 21:27:45.174122 1199225 round_trippers.go:580]     Audit-Id: 34ad9982-b0a3-41a3-bab1-6c26e93268c3
	I0717 21:27:45.174129 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:45.174136 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:45.174143 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:45.174150 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:45.174555 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:45.670500 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:45.670526 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:45.670536 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:45.670543 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:45.673146 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:45.673211 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:45.673220 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:45.673228 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:45.673234 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:45 GMT
	I0717 21:27:45.673241 1199225 round_trippers.go:580]     Audit-Id: 6c64ca84-7379-432e-9db7-4ab6b354cd69
	I0717 21:27:45.673248 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:45.673254 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:45.673398 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:46.170832 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:46.170857 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:46.170867 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:46.170876 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:46.173394 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:46.173419 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:46.173427 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:46.173435 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:46 GMT
	I0717 21:27:46.173442 1199225 round_trippers.go:580]     Audit-Id: 278c536c-fb16-41c0-a570-1fceaf22468d
	I0717 21:27:46.173448 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:46.173455 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:46.173461 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:46.173756 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:46.671480 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:46.671505 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:46.671516 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:46.671523 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:46.674014 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:46.674070 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:46.674081 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:46.674088 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:46 GMT
	I0717 21:27:46.674095 1199225 round_trippers.go:580]     Audit-Id: 3ec9ab86-c1eb-4563-8433-9f2fbd6af20f
	I0717 21:27:46.674101 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:46.674108 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:46.674115 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:46.674221 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:46.674594 1199225 node_ready.go:58] node "multinode-810165-m02" has status "Ready":"False"
	I0717 21:27:47.170477 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:47.170501 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:47.170512 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:47.170520 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:47.176588 1199225 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 21:27:47.176616 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:47.176625 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:47.176632 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:47.176640 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:47.176646 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:47 GMT
	I0717 21:27:47.176653 1199225 round_trippers.go:580]     Audit-Id: 407730c2-8a06-4a81-af18-71597049ac84
	I0717 21:27:47.176660 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:47.176782 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:47.671419 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:47.671443 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:47.671453 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:47.671467 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:47.674269 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:47.674296 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:47.674305 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:47.674312 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:47.674319 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:47.674326 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:47 GMT
	I0717 21:27:47.674333 1199225 round_trippers.go:580]     Audit-Id: 4a8006a4-1e5d-443b-a3a5-69098b5744dc
	I0717 21:27:47.674339 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:47.674584 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:48.170466 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:48.170492 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:48.170503 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:48.170511 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:48.174054 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:48.174086 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:48.174096 1199225 round_trippers.go:580]     Audit-Id: 1c22487e-508b-46dc-827c-cb979c903934
	I0717 21:27:48.174104 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:48.174111 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:48.174118 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:48.174125 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:48.174131 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:48 GMT
	I0717 21:27:48.174218 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:48.671312 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:48.671336 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:48.671347 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:48.671354 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:48.673867 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:48.673895 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:48.673904 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:48.673912 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:48 GMT
	I0717 21:27:48.673918 1199225 round_trippers.go:580]     Audit-Id: 6dfe3ec1-6ca7-4aab-b069-cb90de8055cc
	I0717 21:27:48.673925 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:48.673932 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:48.673941 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:48.674090 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"509","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0717 21:27:49.170677 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:49.170703 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:49.170714 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:49.170723 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:49.173290 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:49.173312 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:49.173321 1199225 round_trippers.go:580]     Audit-Id: 8667997c-72f8-40fb-b213-eb6543e5c4b3
	I0717 21:27:49.173328 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:49.173335 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:49.173342 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:49.173351 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:49.173358 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:49 GMT
	I0717 21:27:49.173453 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"531","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I0717 21:27:49.173911 1199225 node_ready.go:49] node "multinode-810165-m02" has status "Ready":"True"
	I0717 21:27:49.173932 1199225 node_ready.go:38] duration metric: took 30.507786486s waiting for node "multinode-810165-m02" to be "Ready" ...
	I0717 21:27:49.173943 1199225 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 21:27:49.174010 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 21:27:49.174022 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:49.174031 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:49.174038 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:49.177779 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:49.177811 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:49.177820 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:49.177827 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:49.177834 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:49.177841 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:49 GMT
	I0717 21:27:49.177848 1199225 round_trippers.go:580]     Audit-Id: 1e88bf8d-ea3d-4af4-b9e4-791acc8e4380
	I0717 21:27:49.177857 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:49.178943 1199225 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"531"},"items":[{"metadata":{"name":"coredns-5d78c9869d-sz6sv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0cd666c9-e596-4d13-ba82-c51fdd049cd5","resourceVersion":"442","creationTimestamp":"2023-07-17T21:26:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68974 chars]
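
With the node Ready, the waiter moves on to pod readiness: the single unfiltered PodList GET above pulls every kube-system pod, which is then narrowed to the system-critical components named in the 6m0s wait message. A sketch of that list-and-filter step; that the filtering happens client-side is an assumption (the request URL carries no label selector), and the helper names are hypothetical:

package readiness

import (
	"context"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// selectors copied from the "extra waiting" log line above; each one is a
// single key=value label pair identifying a system-critical component.
var systemCriticalSelectors = []string{
	"k8s-app=kube-dns",
	"component=etcd",
	"component=kube-apiserver",
	"component=kube-controller-manager",
	"k8s-app=kube-proxy",
	"component=kube-scheduler",
}

// listSystemCriticalPods mirrors the single unfiltered PodList GET in the
// log, then keeps only pods matching one of the labels above.
func listSystemCriticalPods(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
	all, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var matched []corev1.Pod
	for _, pod := range all.Items {
		for _, sel := range systemCriticalSelectors {
			kv := strings.SplitN(sel, "=", 2)
			if pod.Labels[kv[0]] == kv[1] {
				matched = append(matched, pod)
				break
			}
		}
	}
	return matched, nil
}
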
	I0717 21:27:49.181946 1199225 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-sz6sv" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:49.182045 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-sz6sv
	I0717 21:27:49.182058 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:49.182069 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:49.182080 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:49.184936 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:49.184963 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:49.184972 1199225 round_trippers.go:580]     Audit-Id: 2f8420d2-aaf9-4a27-b8c9-97c893add1a5
	I0717 21:27:49.184979 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:49.184985 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:49.184992 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:49.184998 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:49.185005 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:49 GMT
	I0717 21:27:49.185099 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-sz6sv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0cd666c9-e596-4d13-ba82-c51fdd049cd5","resourceVersion":"442","creationTimestamp":"2023-07-17T21:26:28Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"75f32b9c-0f0f-4b1d-9c1a-eff97f516d14\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0717 21:27:49.185695 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:49.185716 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:49.185725 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:49.185733 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:49.188267 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:49.188336 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:49.188352 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:49.188360 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:49.188367 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:49.188374 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:49 GMT
	I0717 21:27:49.188381 1199225 round_trippers.go:580]     Audit-Id: 8902ff3f-d366-4e0b-a5d2-3c77dd2cadfd
	I0717 21:27:49.188401 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:49.188520 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:27:49.188934 1199225 pod_ready.go:92] pod "coredns-5d78c9869d-sz6sv" in "kube-system" namespace has status "Ready":"True"
	I0717 21:27:49.188951 1199225 pod_ready.go:81] duration metric: took 6.971101ms waiting for pod "coredns-5d78c9869d-sz6sv" in "kube-system" namespace to be "Ready" ...
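
Each per-pod wait pairs two GETs: the pod itself, then the node hosting it (multinode-810165 for coredns above), and the pod is only counted Ready once both objects carry a True Ready condition. A sketch of that paired check; deriving the node name from pod.Spec.NodeName is an assumption, since the log shows only the resulting URL:

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podAndHostNodeReady reproduces the paired GETs seen in the log: fetch the
// pod, then the node it is scheduled on, and report Ready only when both the
// PodReady and NodeReady conditions are True. Helper name is hypothetical.
func podAndHostNodeReady(ctx context.Context, cs kubernetes.Interface, ns, podName string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	podReady := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			podReady = true
		}
	}
	if !podReady || pod.Spec.NodeName == "" {
		return false, nil
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
			return true, nil
		}
	}
	return false, nil
}

The same pattern repeats below for etcd-multinode-810165 and kube-apiserver-multinode-810165, each resolving in single-digit milliseconds because the control-plane pods were already Ready.
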
	I0717 21:27:49.188963 1199225 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:49.189027 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-810165
	I0717 21:27:49.189037 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:49.189046 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:49.189054 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:49.191568 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:49.191627 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:49.191649 1199225 round_trippers.go:580]     Audit-Id: 82923e42-394e-438f-b4cc-2221b4046438
	I0717 21:27:49.191673 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:49.191707 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:49.191732 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:49.191766 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:49.191781 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:49 GMT
	I0717 21:27:49.191881 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-810165","namespace":"kube-system","uid":"940b7970-5f26-401c-9994-d77008b6d302","resourceVersion":"327","creationTimestamp":"2023-07-17T21:26:15Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ce32c73a62db7bf84590abf5273c1610","kubernetes.io/config.mirror":"ce32c73a62db7bf84590abf5273c1610","kubernetes.io/config.seen":"2023-07-17T21:26:07.702630245Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0717 21:27:49.192362 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:49.192391 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:49.192401 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:49.192409 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:49.195005 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:49.195033 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:49.195042 1199225 round_trippers.go:580]     Audit-Id: 9103f796-4a62-46b1-be9c-686782439e71
	I0717 21:27:49.195057 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:49.195065 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:49.195072 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:49.195079 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:49.195087 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:49 GMT
	I0717 21:27:49.195268 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:27:49.195669 1199225 pod_ready.go:92] pod "etcd-multinode-810165" in "kube-system" namespace has status "Ready":"True"
	I0717 21:27:49.195687 1199225 pod_ready.go:81] duration metric: took 6.716479ms waiting for pod "etcd-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:49.195705 1199225 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:49.195781 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-810165
	I0717 21:27:49.195789 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:49.195797 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:49.195804 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:49.198535 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:49.198570 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:49.198580 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:49.198586 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:49.198594 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:49.198600 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:49 GMT
	I0717 21:27:49.198610 1199225 round_trippers.go:580]     Audit-Id: 352cbff8-142a-452b-818b-4c15e8a1f965
	I0717 21:27:49.198617 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:49.198761 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-810165","namespace":"kube-system","uid":"a7633458-ccb5-468c-83f2-49d4163e531d","resourceVersion":"307","creationTimestamp":"2023-07-17T21:26:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"7ed7dd74add45e8e07e2f2a7e8e5f118","kubernetes.io/config.mirror":"7ed7dd74add45e8e07e2f2a7e8e5f118","kubernetes.io/config.seen":"2023-07-17T21:26:15.462821448Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0717 21:27:49.199358 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:49.199373 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:49.199383 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:49.199391 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:49.201860 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:49.201881 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:49.201890 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:49 GMT
	I0717 21:27:49.201897 1199225 round_trippers.go:580]     Audit-Id: cbda382e-7c84-49f4-b647-cfb0765d8ff3
	I0717 21:27:49.201906 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:49.201913 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:49.201919 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:49.201926 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:49.202310 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:27:49.202751 1199225 pod_ready.go:92] pod "kube-apiserver-multinode-810165" in "kube-system" namespace has status "Ready":"True"
	I0717 21:27:49.202770 1199225 pod_ready.go:81] duration metric: took 7.054958ms waiting for pod "kube-apiserver-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:49.202783 1199225 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:49.202844 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-810165
	I0717 21:27:49.202853 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:49.202862 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:49.202869 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:49.205631 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:49.205654 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:49.205663 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:49.205671 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:49 GMT
	I0717 21:27:49.205678 1199225 round_trippers.go:580]     Audit-Id: 13261a43-90c5-4eb3-9dfc-afe219016f9e
	I0717 21:27:49.205684 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:49.205690 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:49.205697 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:49.205838 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-810165","namespace":"kube-system","uid":"abb5ca6b-3ac7-4f15-9507-c3b23658399d","resourceVersion":"328","creationTimestamp":"2023-07-17T21:26:15Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"627e33095190fdce831ec9aa3b244b71","kubernetes.io/config.mirror":"627e33095190fdce831ec9aa3b244b71","kubernetes.io/config.seen":"2023-07-17T21:26:15.462827413Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0717 21:27:49.206362 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:49.206371 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:49.206379 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:49.206386 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:49.208851 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:49.208875 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:49.208884 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:49.208892 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:49.208900 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:49 GMT
	I0717 21:27:49.208907 1199225 round_trippers.go:580]     Audit-Id: f0160c7b-7af9-4e42-9ff3-deed389a32b1
	I0717 21:27:49.208916 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:49.208922 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:49.209033 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:27:49.209455 1199225 pod_ready.go:92] pod "kube-controller-manager-multinode-810165" in "kube-system" namespace has status "Ready":"True"
	I0717 21:27:49.209474 1199225 pod_ready.go:81] duration metric: took 6.684111ms waiting for pod "kube-controller-manager-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:49.209491 1199225 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-244vk" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:49.370789 1199225 request.go:628] Waited for 161.218105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-244vk
	I0717 21:27:49.370871 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-244vk
	I0717 21:27:49.370881 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:49.370891 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:49.370898 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:49.373724 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:49.373750 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:49.373760 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:49.373767 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:49 GMT
	I0717 21:27:49.373781 1199225 round_trippers.go:580]     Audit-Id: f3902ade-927a-4439-8237-bdbddbbc034d
	I0717 21:27:49.373790 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:49.373797 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:49.373808 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:49.374101 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-244vk","generateName":"kube-proxy-","namespace":"kube-system","uid":"3af224a1-d471-4cf5-b8dc-1abb030901c5","resourceVersion":"413","creationTimestamp":"2023-07-17T21:26:28Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd2fb151-6110-42d2-8b60-f21076800dc8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd2fb151-6110-42d2-8b60-f21076800dc8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0717 21:27:49.570773 1199225 request.go:628] Waited for 196.145717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:49.570846 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:49.570857 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:49.570867 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:49.570878 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:49.573410 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:49.573430 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:49.573439 1199225 round_trippers.go:580]     Audit-Id: 7d1697f5-b3a9-4d44-a505-0631300182c8
	I0717 21:27:49.573446 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:49.573452 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:49.573459 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:49.573466 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:49.573472 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:49 GMT
	I0717 21:27:49.573595 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:27:49.574021 1199225 pod_ready.go:92] pod "kube-proxy-244vk" in "kube-system" namespace has status "Ready":"True"
	I0717 21:27:49.574037 1199225 pod_ready.go:81] duration metric: took 364.534478ms waiting for pod "kube-proxy-244vk" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:49.574048 1199225 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zg2pl" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:49.771402 1199225 request.go:628] Waited for 197.292817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zg2pl
	I0717 21:27:49.771460 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zg2pl
	I0717 21:27:49.771477 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:49.771487 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:49.771500 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:49.774123 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:49.774151 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:49.774160 1199225 round_trippers.go:580]     Audit-Id: 7429df06-30e8-405e-a41d-6d4a131297da
	I0717 21:27:49.774166 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:49.774174 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:49.774181 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:49.774188 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:49.774195 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:49 GMT
	I0717 21:27:49.774338 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zg2pl","generateName":"kube-proxy-","namespace":"kube-system","uid":"6f8de40f-a21f-44b3-88cb-84e0c5036e73","resourceVersion":"497","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd2fb151-6110-42d2-8b60-f21076800dc8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd2fb151-6110-42d2-8b60-f21076800dc8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0717 21:27:49.971163 1199225 request.go:628] Waited for 196.322249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:49.971242 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165-m02
	I0717 21:27:49.971251 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:49.971261 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:49.971273 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:49.973844 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:49.973869 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:49.973879 1199225 round_trippers.go:580]     Audit-Id: 961151d8-20eb-4660-83da-99551cf6e741
	I0717 21:27:49.973885 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:49.973892 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:49.973903 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:49.973917 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:49.973925 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:49 GMT
	I0717 21:27:49.974029 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165-m02","uid":"38fea0dc-d23d-4674-b775-b6f368aa64d7","resourceVersion":"531","creationTimestamp":"2023-07-17T21:27:17Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:27:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I0717 21:27:49.974401 1199225 pod_ready.go:92] pod "kube-proxy-zg2pl" in "kube-system" namespace has status "Ready":"True"
	I0717 21:27:49.974419 1199225 pod_ready.go:81] duration metric: took 400.363537ms waiting for pod "kube-proxy-zg2pl" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:49.974431 1199225 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:50.170793 1199225 request.go:628] Waited for 196.289486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-810165
	I0717 21:27:50.170889 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-810165
	I0717 21:27:50.170896 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:50.170907 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:50.170919 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:50.173965 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:50.174064 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:50.174092 1199225 round_trippers.go:580]     Audit-Id: 64601c72-11bb-4347-af13-a799a1ccd20a
	I0717 21:27:50.174136 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:50.174160 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:50.174191 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:50.174241 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:50.174266 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:50 GMT
	I0717 21:27:50.174421 1199225 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-810165","namespace":"kube-system","uid":"66b548db-9ef8-4ca9-ac6c-e148a2b0d30a","resourceVersion":"309","creationTimestamp":"2023-07-17T21:26:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19e0402d40ce8bc04850f4115dc87876","kubernetes.io/config.mirror":"19e0402d40ce8bc04850f4115dc87876","kubernetes.io/config.seen":"2023-07-17T21:26:15.462828972Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T21:26:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0717 21:27:50.371249 1199225 request.go:628] Waited for 196.331595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:50.371329 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-810165
	I0717 21:27:50.371341 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:50.371351 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:50.371358 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:50.375058 1199225 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 21:27:50.375084 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:50.375093 1199225 round_trippers.go:580]     Audit-Id: ba3101f8-2b03-4cac-8ba1-5b1b1755b176
	I0717 21:27:50.375100 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:50.375107 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:50.375113 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:50.375127 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:50.375134 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:50 GMT
	I0717 21:27:50.375245 1199225 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T21:26:12Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0717 21:27:50.375661 1199225 pod_ready.go:92] pod "kube-scheduler-multinode-810165" in "kube-system" namespace has status "Ready":"True"
	I0717 21:27:50.375680 1199225 pod_ready.go:81] duration metric: took 401.239081ms waiting for pod "kube-scheduler-multinode-810165" in "kube-system" namespace to be "Ready" ...
	I0717 21:27:50.375692 1199225 pod_ready.go:38] duration metric: took 1.20173889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 21:27:50.375711 1199225 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 21:27:50.375774 1199225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:27:50.389737 1199225 system_svc.go:56] duration metric: took 14.020118ms WaitForService to wait for kubelet.
	I0717 21:27:50.389765 1199225 kubeadm.go:581] duration metric: took 31.813213279s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 21:27:50.389785 1199225 node_conditions.go:102] verifying NodePressure condition ...
	I0717 21:27:50.571174 1199225 request.go:628] Waited for 181.313816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0717 21:27:50.571286 1199225 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0717 21:27:50.571299 1199225 round_trippers.go:469] Request Headers:
	I0717 21:27:50.571309 1199225 round_trippers.go:473]     Accept: application/json, */*
	I0717 21:27:50.571317 1199225 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0717 21:27:50.574126 1199225 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 21:27:50.574150 1199225 round_trippers.go:577] Response Headers:
	I0717 21:27:50.574159 1199225 round_trippers.go:580]     Audit-Id: 742b9d63-8225-4400-8ae2-09fb7872c414
	I0717 21:27:50.574166 1199225 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 21:27:50.574182 1199225 round_trippers.go:580]     Content-Type: application/json
	I0717 21:27:50.574193 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 18a505bc-e217-422f-9063-8d40850e68d2
	I0717 21:27:50.574201 1199225 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5030e93d-714a-4cba-a71b-5c3adac95200
	I0717 21:27:50.574212 1199225 round_trippers.go:580]     Date: Mon, 17 Jul 2023 21:27:50 GMT
	I0717 21:27:50.574449 1199225 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"532"},"items":[{"metadata":{"name":"multinode-810165","uid":"2149f128-4a27-4f50-9409-5b1a01bbf2da","resourceVersion":"426","creationTimestamp":"2023-07-17T21:26:12Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-810165","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-810165","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T21_26_16_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12452 chars]
	I0717 21:27:50.575124 1199225 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 21:27:50.575144 1199225 node_conditions.go:123] node cpu capacity is 2
	I0717 21:27:50.575156 1199225 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 21:27:50.575161 1199225 node_conditions.go:123] node cpu capacity is 2
	I0717 21:27:50.575166 1199225 node_conditions.go:105] duration metric: took 185.376211ms to run NodePressure ...
	I0717 21:27:50.575179 1199225 start.go:228] waiting for startup goroutines ...
	I0717 21:27:50.575203 1199225 start.go:242] writing updated cluster config ...
	I0717 21:27:50.575525 1199225 ssh_runner.go:195] Run: rm -f paused
	I0717 21:27:50.635890 1199225 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 21:27:50.638947 1199225 out.go:177] * Done! kubectl is now configured to use "multinode-810165" cluster and "default" namespace by default
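
The pod_ready.go lines above show minikube polling each control-plane pod until its Ready condition reports True, with a 6m0s ceiling per pod. A minimal client-go sketch of that wait pattern, assuming a reachable kubeconfig at the default location (the namespace and pod name are copied from this log; this is an illustration, not minikube's actual implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Poll until Ready or until the 6-minute cap seen in the log expires.
    	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"etcd-multinode-810165", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat errors as "not ready yet" and keep polling
    		}
    		return isPodReady(pod), nil
    	})
    	fmt.Println("ready:", err == nil)
    }

wait.PollImmediate runs the condition once before the first tick, which would explain the single-digit-millisecond "took" durations logged above for pods that are already Ready.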
	
	* 
	* ==> CRI-O <==
	* Jul 17 21:26:59 multinode-810165 crio[902]: time="2023-07-17 21:26:59.843544756Z" level=info msg="Starting container: 17e9aec5899b83ff51b86601b0bdb8b04777f57708128bf247c4c94370a166a8" id=c2c234dc-709e-43fc-86ec-a6c925052427 name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 21:26:59 multinode-810165 crio[902]: time="2023-07-17 21:26:59.846691640Z" level=info msg="Created container b9578283723b3640e18676c19217da0b3a88b133134a6ceacd9756d985a3abd7: kube-system/coredns-5d78c9869d-sz6sv/coredns" id=e56c2b43-3fce-4848-aabd-10f249c04bcc name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 21:26:59 multinode-810165 crio[902]: time="2023-07-17 21:26:59.847551881Z" level=info msg="Starting container: b9578283723b3640e18676c19217da0b3a88b133134a6ceacd9756d985a3abd7" id=73068e8e-2844-4fa5-9ca0-2a3bae9c8e88 name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 21:26:59 multinode-810165 crio[902]: time="2023-07-17 21:26:59.860920923Z" level=info msg="Started container" PID=1927 containerID=17e9aec5899b83ff51b86601b0bdb8b04777f57708128bf247c4c94370a166a8 description=kube-system/storage-provisioner/storage-provisioner id=c2c234dc-709e-43fc-86ec-a6c925052427 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9217b74240b9e743221faee4575462c2841eb7be67bd498d41f4f4e544f722b6
	Jul 17 21:26:59 multinode-810165 crio[902]: time="2023-07-17 21:26:59.877033855Z" level=info msg="Started container" PID=1921 containerID=b9578283723b3640e18676c19217da0b3a88b133134a6ceacd9756d985a3abd7 description=kube-system/coredns-5d78c9869d-sz6sv/coredns id=73068e8e-2844-4fa5-9ca0-2a3bae9c8e88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a783477b2072d11ed7874ba3a5b83a986f10e399eecf6168fd352b84708ee8ca
	Jul 17 21:27:53 multinode-810165 crio[902]: time="2023-07-17 21:27:53.674210158Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-mdhfd/POD" id=100eaaf8-e29c-4590-a517-058756e94dc2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jul 17 21:27:53 multinode-810165 crio[902]: time="2023-07-17 21:27:53.674280550Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 21:27:53 multinode-810165 crio[902]: time="2023-07-17 21:27:53.688868338Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-mdhfd Namespace:default ID:b4dad305b8cecfa43fac436d46fa723002e59f47b5bef59182e53a662f0602fb UID:7cb1bbb8-1754-4746-94d1-a1e11cfb804a NetNS:/var/run/netns/322f9251-bdc9-42be-b3a9-a8365a4be31e Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 21:27:53 multinode-810165 crio[902]: time="2023-07-17 21:27:53.688911432Z" level=info msg="Adding pod default_busybox-67b7f59bb-mdhfd to CNI network \"kindnet\" (type=ptp)"
	Jul 17 21:27:53 multinode-810165 crio[902]: time="2023-07-17 21:27:53.700237120Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-mdhfd Namespace:default ID:b4dad305b8cecfa43fac436d46fa723002e59f47b5bef59182e53a662f0602fb UID:7cb1bbb8-1754-4746-94d1-a1e11cfb804a NetNS:/var/run/netns/322f9251-bdc9-42be-b3a9-a8365a4be31e Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 21:27:53 multinode-810165 crio[902]: time="2023-07-17 21:27:53.700410396Z" level=info msg="Checking pod default_busybox-67b7f59bb-mdhfd for CNI network kindnet (type=ptp)"
	Jul 17 21:27:53 multinode-810165 crio[902]: time="2023-07-17 21:27:53.715987066Z" level=info msg="Ran pod sandbox b4dad305b8cecfa43fac436d46fa723002e59f47b5bef59182e53a662f0602fb with infra container: default/busybox-67b7f59bb-mdhfd/POD" id=100eaaf8-e29c-4590-a517-058756e94dc2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jul 17 21:27:53 multinode-810165 crio[902]: time="2023-07-17 21:27:53.716978851Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=02aa59b5-4b9e-4e6b-8d06-4b95b8a3b582 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 21:27:53 multinode-810165 crio[902]: time="2023-07-17 21:27:53.717282908Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=02aa59b5-4b9e-4e6b-8d06-4b95b8a3b582 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 21:27:53 multinode-810165 crio[902]: time="2023-07-17 21:27:53.719828357Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=c60f8b27-2cab-468e-9c6d-eeae29d89a83 name=/runtime.v1.ImageService/PullImage
	Jul 17 21:27:53 multinode-810165 crio[902]: time="2023-07-17 21:27:53.721119236Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jul 17 21:27:54 multinode-810165 crio[902]: time="2023-07-17 21:27:54.379131370Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jul 17 21:27:55 multinode-810165 crio[902]: time="2023-07-17 21:27:55.652561699Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=c60f8b27-2cab-468e-9c6d-eeae29d89a83 name=/runtime.v1.ImageService/PullImage
	Jul 17 21:27:55 multinode-810165 crio[902]: time="2023-07-17 21:27:55.653768385Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=c8db62de-a1a7-47bd-a068-352e84a65319 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 21:27:55 multinode-810165 crio[902]: time="2023-07-17 21:27:55.654433278Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c8db62de-a1a7-47bd-a068-352e84a65319 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 21:27:55 multinode-810165 crio[902]: time="2023-07-17 21:27:55.656196755Z" level=info msg="Creating container: default/busybox-67b7f59bb-mdhfd/busybox" id=43279528-168c-4108-8c48-137bd3ec982a name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 21:27:55 multinode-810165 crio[902]: time="2023-07-17 21:27:55.656290367Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 21:27:55 multinode-810165 crio[902]: time="2023-07-17 21:27:55.739486883Z" level=info msg="Created container 5391d957a772f17b182e1b9efbc4666b09a44740c74498f979db82bd64cf6597: default/busybox-67b7f59bb-mdhfd/busybox" id=43279528-168c-4108-8c48-137bd3ec982a name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 21:27:55 multinode-810165 crio[902]: time="2023-07-17 21:27:55.740305860Z" level=info msg="Starting container: 5391d957a772f17b182e1b9efbc4666b09a44740c74498f979db82bd64cf6597" id=4a5c6ee1-babc-45e1-ba55-41f94a223fbb name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 21:27:55 multinode-810165 crio[902]: time="2023-07-17 21:27:55.751855327Z" level=info msg="Started container" PID=2074 containerID=5391d957a772f17b182e1b9efbc4666b09a44740c74498f979db82bd64cf6597 description=default/busybox-67b7f59bb-mdhfd/busybox id=4a5c6ee1-babc-45e1-ba55-41f94a223fbb name=/runtime.v1.RuntimeService/StartContainer sandboxID=b4dad305b8cecfa43fac436d46fa723002e59f47b5bef59182e53a662f0602fb
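
The CRI-O entries above trace the standard CRI call sequence for a new pod: RunPodSandbox, ImageStatus (image not found), PullImage, ImageStatus again, then CreateContainer and StartContainer. A sketch of the ImageStatus/PullImage pair against the same v1 CRI endpoint, assuming the crio.sock path from the node annotation and the busybox image from the log (illustrative client code, not part of CRI-O or minikube):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Socket path taken from the node's cri-socket annotation above.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	img := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"}
    	client := runtimeapi.NewImageServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()

    	// ImageStatus first (the "Checking image status" lines), then PullImage
    	// only if the image is absent (the "not found" / "Pulling image" lines).
    	status, err := client.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: img})
    	if err != nil {
    		panic(err)
    	}
    	if status.Image == nil {
    		resp, err := client.PullImage(ctx, &runtimeapi.PullImageRequest{Image: img})
    		if err != nil {
    			panic(err)
    		}
    		fmt.Println("pulled:", resp.ImageRef)
    	}
    }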
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5391d957a772f       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   b4dad305b8cec       busybox-67b7f59bb-mdhfd
	b9578283723b3       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      About a minute ago   Running             coredns                   0                   a783477b2072d       coredns-5d78c9869d-sz6sv
	17e9aec5899b8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      About a minute ago   Running             storage-provisioner       0                   9217b74240b9e       storage-provisioner
	e3757362364e6       fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a                                      About a minute ago   Running             kube-proxy                0                   414a192414d15       kube-proxy-244vk
	4529f5b6878cb       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                      About a minute ago   Running             kindnet-cni               0                   0a89358d92704       kindnet-l6lkj
	7fa4c144efb62       39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473                                      About a minute ago   Running             kube-apiserver            0                   3f648d0319094       kube-apiserver-multinode-810165
	21951bfe4ff99       ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8                                      About a minute ago   Running             kube-controller-manager   0                   093cf0f0c42f7       kube-controller-manager-multinode-810165
	b2ecc8a75707e       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                      About a minute ago   Running             etcd                      0                   1e81fb3aa2442       etcd-multinode-810165
	d2967956211b7       bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540                                      About a minute ago   Running             kube-scheduler            0                   38cc5544cdf51       kube-scheduler-multinode-810165
	
	* 
	* ==> coredns [b9578283723b3640e18676c19217da0b3a88b133134a6ceacd9756d985a3abd7] <==
	* [INFO] 10.244.0.3:53598 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121075s
	[INFO] 10.244.1.2:58906 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131618s
	[INFO] 10.244.1.2:40815 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001414004s
	[INFO] 10.244.1.2:46073 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109579s
	[INFO] 10.244.1.2:34477 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082962s
	[INFO] 10.244.1.2:43008 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001232556s
	[INFO] 10.244.1.2:48714 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077055s
	[INFO] 10.244.1.2:48920 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057527s
	[INFO] 10.244.1.2:52699 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067356s
	[INFO] 10.244.0.3:43091 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089952s
	[INFO] 10.244.0.3:39640 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056254s
	[INFO] 10.244.0.3:60284 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000046893s
	[INFO] 10.244.0.3:53686 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065124s
	[INFO] 10.244.1.2:60025 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010958s
	[INFO] 10.244.1.2:34212 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059783s
	[INFO] 10.244.1.2:58914 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060611s
	[INFO] 10.244.1.2:50334 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005463s
	[INFO] 10.244.0.3:38245 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107479s
	[INFO] 10.244.0.3:46028 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121953s
	[INFO] 10.244.0.3:55627 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000142391s
	[INFO] 10.244.0.3:45439 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114765s
	[INFO] 10.244.1.2:48183 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146191s
	[INFO] 10.244.1.2:42164 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000098642s
	[INFO] 10.244.1.2:39802 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000099347s
	[INFO] 10.244.1.2:33884 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073075s
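
The paired queries above show the pod resolver's search-path expansion: the bare name kubernetes.default is NXDOMAIN (forwarded upstream, flags rd,ra), kubernetes.default.default.svc.cluster.local is NXDOMAIN from the authoritative cluster zone, and only the fully qualified kubernetes.default.svc.cluster.local answers NOERROR. The PTR lookups for 10.0.96.10.in-addr.arpa are reverse queries for 10.96.0.10, the conventional cluster-DNS ClusterIP. A sketch that queries that resolver directly (the 10.96.0.10 address is an inference from the PTR records, and the code must run somewhere with a route to the service network):

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// 10.96.0.10 is read back out of the in-addr.arpa queries above;
    	// adjust if your cluster uses a different service CIDR.
    	r := &net.Resolver{
    		PreferGo: true,
    		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
    			d := net.Dialer{Timeout: 2 * time.Second}
    			return d.DialContext(ctx, network, "10.96.0.10:53")
    		},
    	}
    	addrs, err := r.LookupHost(context.TODO(), "kubernetes.default.svc.cluster.local")
    	fmt.Println(addrs, err) // e.g. [10.96.0.1] <nil> from inside the cluster
    }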
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-810165
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-810165
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=multinode-810165
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T21_26_16_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 21:26:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-810165
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 21:27:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 21:26:59 +0000   Mon, 17 Jul 2023 21:26:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 21:26:59 +0000   Mon, 17 Jul 2023 21:26:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 21:26:59 +0000   Mon, 17 Jul 2023 21:26:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 21:26:59 +0000   Mon, 17 Jul 2023 21:26:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-810165
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	System Info:
	  Machine ID:                 0e2a01640ab7422b8c552a27a77233ee
	  System UUID:                66566234-cadf-46de-b793-2c34aad46be8
	  Boot ID:                    30727b23-eda1-49fe-8b46-0f11c052162c
	  Kernel Version:             5.15.0-1039-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-mdhfd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5d78c9869d-sz6sv                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     92s
	  kube-system                 etcd-multinode-810165                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         105s
	  kube-system                 kindnet-l6lkj                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      92s
	  kube-system                 kube-apiserver-multinode-810165             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-multinode-810165    200m (10%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-244vk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-multinode-810165             100m (5%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s (x8 over 113s)  kubelet          Node multinode-810165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x8 over 113s)  kubelet          Node multinode-810165 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x8 over 113s)  kubelet          Node multinode-810165 status is now: NodeHasSufficientPID
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s                 kubelet          Node multinode-810165 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s                 kubelet          Node multinode-810165 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s                 kubelet          Node multinode-810165 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           93s                  node-controller  Node multinode-810165 event: Registered Node multinode-810165 in Controller
	  Normal  NodeReady                61s                  kubelet          Node multinode-810165 status is now: NodeReady
	
	
	Name:               multinode-810165-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-810165-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 21:27:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-810165-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 21:27:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 21:27:49 +0000   Mon, 17 Jul 2023 21:27:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 21:27:49 +0000   Mon, 17 Jul 2023 21:27:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 21:27:49 +0000   Mon, 17 Jul 2023 21:27:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 21:27:49 +0000   Mon, 17 Jul 2023 21:27:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-810165-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022632Ki
	  pods:               110
	System Info:
	  Machine ID:                 01c6df3c162e47fbb90e634bb8ef40dc
	  System UUID:                12bdecd9-b2e6-486c-bdb7-347f30f4cf92
	  Boot ID:                    30727b23-eda1-49fe-8b46-0f11c052162c
	  Kernel Version:             5.15.0-1039-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-zhxtx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-gjllk              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      43s
	  kube-system                 kube-proxy-zg2pl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 42s                kube-proxy       
	  Normal  NodeHasSufficientMemory  43s (x5 over 44s)  kubelet          Node multinode-810165-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x5 over 44s)  kubelet          Node multinode-810165-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x5 over 44s)  kubelet          Node multinode-810165-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node multinode-810165-m02 event: Registered Node multinode-810165-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-810165-m02 status is now: NodeReady
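
The percentage columns in the node tables above are each request or limit divided by the node's allocatable capacity. A quick check of the 42% CPU figure using the same quantity arithmetic kubectl performs (values copied from the first node's table; illustrative only):

    package main

    import (
    	"fmt"

    	"k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
    	requested := resource.MustParse("850m") // total CPU requests on multinode-810165
    	allocatable := resource.MustParse("2")  // allocatable CPU from the capacity block
    	pct := float64(requested.MilliValue()) / float64(allocatable.MilliValue()) * 100
    	fmt.Printf("cpu %s (%d%%)\n", requested.String(), int64(pct)) // prints: cpu 850m (42%)
    }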
	
	* 
	* ==> dmesg <==
	* [  +0.001039] FS-Cache: O-key=[8] 'c5d6c90000000000'
	[  +0.000736] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001009] FS-Cache: N-cookie d=00000000ec1ea241{9p.inode} n=0000000057de11d6
	[  +0.001024] FS-Cache: N-key=[8] 'c5d6c90000000000'
	[  +0.003172] FS-Cache: Duplicate cookie detected
	[  +0.000720] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001041] FS-Cache: O-cookie d=00000000ec1ea241{9p.inode} n=00000000a1180baf
	[  +0.001034] FS-Cache: O-key=[8] 'c5d6c90000000000'
	[  +0.000718] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000920] FS-Cache: N-cookie d=00000000ec1ea241{9p.inode} n=00000000acae0e0a
	[  +0.001105] FS-Cache: N-key=[8] 'c5d6c90000000000'
	[Jul17 21:14] FS-Cache: Duplicate cookie detected
	[  +0.000809] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001182] FS-Cache: O-cookie d=00000000ec1ea241{9p.inode} n=000000002f515dbf
	[  +0.001342] FS-Cache: O-key=[8] 'c4d6c90000000000'
	[  +0.000851] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001037] FS-Cache: N-cookie d=00000000ec1ea241{9p.inode} n=0000000057de11d6
	[  +0.001218] FS-Cache: N-key=[8] 'c4d6c90000000000'
	[  +0.402823] FS-Cache: Duplicate cookie detected
	[  +0.000743] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001041] FS-Cache: O-cookie d=00000000ec1ea241{9p.inode} n=000000001063103b
	[  +0.001105] FS-Cache: O-key=[8] 'cad6c90000000000'
	[  +0.000728] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=00000000ec1ea241{9p.inode} n=00000000b6fac530
	[  +0.001155] FS-Cache: N-key=[8] 'cad6c90000000000'
	
	* 
	* ==> etcd [b2ecc8a75707eb1dc4046e0d77e0cf27e5004b5b3ece596d9b51bcd26d2a1bcb] <==
	* {"level":"info","ts":"2023-07-17T21:26:08.593Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T21:26:08.593Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T21:26:08.593Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T21:26:08.594Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-07-17T21:26:08.594Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-07-17T21:26:08.594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-07-17T21:26:08.594Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-07-17T21:26:09.335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-17T21:26:09.335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-17T21:26:09.335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-07-17T21:26:09.335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-07-17T21:26:09.335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-07-17T21:26:09.335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-07-17T21:26:09.335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-07-17T21:26:09.339Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T21:26:09.341Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-810165 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T21:26:09.341Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T21:26:09.342Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-07-17T21:26:09.343Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T21:26:09.343Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T21:26:09.343Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T21:26:09.353Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T21:26:09.365Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T21:26:09.365Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T21:26:09.365Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  21:28:01 up  6:10,  0 users,  load average: 0.53, 1.09, 1.39
	Linux multinode-810165 5.15.0-1039-aws #44~20.04.1-Ubuntu SMP Thu Jun 22 12:21:08 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [4529f5b6878cbb5c6bfcc674c02fed8965d49f1a52b564be4a427f5d5daa47b6] <==
	* I0717 21:26:59.082110       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0717 21:26:59.082165       1 main.go:227] handling current node
	I0717 21:27:09.098344       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0717 21:27:09.098373       1 main.go:227] handling current node
	I0717 21:27:19.109370       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0717 21:27:19.109400       1 main.go:227] handling current node
	I0717 21:27:19.109411       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0717 21:27:19.109417       1 main.go:250] Node multinode-810165-m02 has CIDR [10.244.1.0/24] 
	I0717 21:27:19.109552       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0717 21:27:29.115069       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0717 21:27:29.115101       1 main.go:227] handling current node
	I0717 21:27:29.115113       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0717 21:27:29.115118       1 main.go:250] Node multinode-810165-m02 has CIDR [10.244.1.0/24] 
	I0717 21:27:39.120056       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0717 21:27:39.120089       1 main.go:227] handling current node
	I0717 21:27:39.120100       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0717 21:27:39.120107       1 main.go:250] Node multinode-810165-m02 has CIDR [10.244.1.0/24] 
	I0717 21:27:49.129968       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0717 21:27:49.129994       1 main.go:227] handling current node
	I0717 21:27:49.130005       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0717 21:27:49.130011       1 main.go:250] Node multinode-810165-m02 has CIDR [10.244.1.0/24] 
	I0717 21:27:59.134734       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0717 21:27:59.134759       1 main.go:227] handling current node
	I0717 21:27:59.134769       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0717 21:27:59.134775       1 main.go:250] Node multinode-810165-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [7fa4c144efb62f222704cd1994446732d77d17e415b1ccd4b8afc6c25d3280b1] <==
	* I0717 21:26:12.397605       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0717 21:26:12.397700       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 21:26:12.399121       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0717 21:26:12.399157       1 shared_informer.go:318] Caches are synced for configmaps
	I0717 21:26:12.399354       1 aggregator.go:152] initial CRD sync complete...
	I0717 21:26:12.399370       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 21:26:12.399377       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 21:26:12.399383       1 cache.go:39] Caches are synced for autoregister controller
	I0717 21:26:12.771674       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 21:26:13.076051       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 21:26:13.080637       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 21:26:13.080727       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 21:26:13.603872       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 21:26:13.647357       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 21:26:13.729036       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0717 21:26:13.740959       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0717 21:26:13.742085       1 controller.go:624] quota admission added evaluator for: endpoints
	I0717 21:26:13.750364       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 21:26:14.306427       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 21:26:15.374509       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 21:26:15.392156       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0717 21:26:15.406900       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 21:26:27.870308       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0717 21:26:27.936418       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0717 21:27:58.393468       1 upgradeaware.go:440] Error proxying data from backend to client: write tcp 192.168.58.2:8443->192.168.58.1:37674: write: broken pipe
	
	* 
	* ==> kube-controller-manager [21951bfe4ff99a1aad1bb15964ba0226a14089115f7a2508e2711a0ce19cec40] <==
	* I0717 21:26:27.254378       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0717 21:26:27.270156       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 21:26:27.303819       1 shared_informer.go:318] Caches are synced for attach detach
	I0717 21:26:27.734886       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 21:26:27.741091       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 21:26:27.741119       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0717 21:26:27.875478       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0717 21:26:28.083312       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-l6lkj"
	I0717 21:26:28.086166       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-244vk"
	I0717 21:26:28.311904       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0717 21:26:28.369773       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-r747k"
	I0717 21:26:28.433144       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-sz6sv"
	I0717 21:26:28.689947       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-r747k"
	I0717 21:27:02.166048       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0717 21:27:17.583167       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-810165-m02\" does not exist"
	I0717 21:27:17.597917       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gjllk"
	I0717 21:27:17.605736       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zg2pl"
	I0717 21:27:17.622561       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-810165-m02" podCIDRs=[10.244.1.0/24]
	I0717 21:27:22.169409       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-810165-m02"
	I0717 21:27:22.169597       1 event.go:307] "Event occurred" object="multinode-810165-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-810165-m02 event: Registered Node multinode-810165-m02 in Controller"
	W0717 21:27:49.061807       1 topologycache.go:232] Can't get CPU or zone information for multinode-810165-m02 node
	I0717 21:27:51.502184       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0717 21:27:51.518191       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-zhxtx"
	I0717 21:27:51.544839       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-mdhfd"
	I0717 21:27:52.185652       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-zhxtx" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-zhxtx"
	
	* 
	* ==> kube-proxy [e3757362364e6e0217474da2783a0d0c617ffb39a4f33e7b1d8d000fead16ca1] <==
	* I0717 21:26:30.261135       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0717 21:26:30.261273       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0717 21:26:30.261300       1 server_others.go:554] "Using iptables proxy"
	I0717 21:26:30.285426       1 server_others.go:192] "Using iptables Proxier"
	I0717 21:26:30.285465       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 21:26:30.285474       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 21:26:30.285487       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 21:26:30.285554       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 21:26:30.286110       1 server.go:658] "Version info" version="v1.27.3"
	I0717 21:26:30.286129       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 21:26:30.287587       1 config.go:188] "Starting service config controller"
	I0717 21:26:30.288607       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 21:26:30.288671       1 config.go:97] "Starting endpoint slice config controller"
	I0717 21:26:30.288678       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 21:26:30.292715       1 config.go:315] "Starting node config controller"
	I0717 21:26:30.292808       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 21:26:30.389416       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 21:26:30.389478       1 shared_informer.go:318] Caches are synced for service config
	I0717 21:26:30.393954       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d2967956211b749cc78ad924dad84167518a2166b48ad5011907c099a09ac3fe] <==
	* W0717 21:26:12.338897       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 21:26:12.339341       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 21:26:12.338934       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 21:26:12.339429       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 21:26:12.338985       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 21:26:12.339518       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 21:26:12.342276       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 21:26:12.342350       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 21:26:12.342449       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 21:26:12.342489       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 21:26:12.342575       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 21:26:12.342612       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 21:26:12.342701       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 21:26:12.342739       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 21:26:12.342824       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 21:26:12.342860       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 21:26:12.342928       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 21:26:12.342964       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 21:26:13.365283       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 21:26:13.365318       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 21:26:13.367518       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 21:26:13.367553       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 21:26:13.374413       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 21:26:13.374541       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0717 21:26:15.028836       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 21:26:28 multinode-810165 kubelet[1391]: I0717 21:26:28.286682    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3af224a1-d471-4cf5-b8dc-1abb030901c5-lib-modules\") pod \"kube-proxy-244vk\" (UID: \"3af224a1-d471-4cf5-b8dc-1abb030901c5\") " pod="kube-system/kube-proxy-244vk"
	Jul 17 21:26:28 multinode-810165 kubelet[1391]: I0717 21:26:28.286709    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6d65\" (UniqueName: \"kubernetes.io/projected/3af224a1-d471-4cf5-b8dc-1abb030901c5-kube-api-access-g6d65\") pod \"kube-proxy-244vk\" (UID: \"3af224a1-d471-4cf5-b8dc-1abb030901c5\") " pod="kube-system/kube-proxy-244vk"
	Jul 17 21:26:29 multinode-810165 kubelet[1391]: E0717 21:26:29.387783    1391 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Jul 17 21:26:29 multinode-810165 kubelet[1391]: E0717 21:26:29.387900    1391 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3af224a1-d471-4cf5-b8dc-1abb030901c5-kube-proxy podName:3af224a1-d471-4cf5-b8dc-1abb030901c5 nodeName:}" failed. No retries permitted until 2023-07-17 21:26:29.887876423 +0000 UTC m=+14.544782713 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/3af224a1-d471-4cf5-b8dc-1abb030901c5-kube-proxy") pod "kube-proxy-244vk" (UID: "3af224a1-d471-4cf5-b8dc-1abb030901c5") : failed to sync configmap cache: timed out waiting for the condition
	Jul 17 21:26:30 multinode-810165 kubelet[1391]: W0717 21:26:30.014125    1391 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/f50b1a146f82a35b69acbef510044a71ef0ffd7f7e690b15ca461dd5db496271/crio-414a192414d154e6c806d399397ae6b94963fe030d12f1a4eff87cb8a12c0bdd WatchSource:0}: Error finding container 414a192414d154e6c806d399397ae6b94963fe030d12f1a4eff87cb8a12c0bdd: Status 404 returned error can't find the container with id 414a192414d154e6c806d399397ae6b94963fe030d12f1a4eff87cb8a12c0bdd
	Jul 17 21:26:30 multinode-810165 kubelet[1391]: I0717 21:26:30.670361    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-l6lkj" podStartSLOduration=2.67031843 podCreationTimestamp="2023-07-17 21:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 21:26:29.667517372 +0000 UTC m=+14.324423670" watchObservedRunningTime="2023-07-17 21:26:30.67031843 +0000 UTC m=+15.327224728"
	Jul 17 21:26:59 multinode-810165 kubelet[1391]: I0717 21:26:59.363762    1391 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jul 17 21:26:59 multinode-810165 kubelet[1391]: I0717 21:26:59.391654    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-244vk" podStartSLOduration=31.391615161 podCreationTimestamp="2023-07-17 21:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 21:26:30.671900656 +0000 UTC m=+15.328806954" watchObservedRunningTime="2023-07-17 21:26:59.391615161 +0000 UTC m=+44.048521467"
	Jul 17 21:26:59 multinode-810165 kubelet[1391]: I0717 21:26:59.392124    1391 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 21:26:59 multinode-810165 kubelet[1391]: I0717 21:26:59.394345    1391 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 21:26:59 multinode-810165 kubelet[1391]: I0717 21:26:59.443429    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/12c56db5-ec1b-4e20-b798-cb2c02e5007f-tmp\") pod \"storage-provisioner\" (UID: \"12c56db5-ec1b-4e20-b798-cb2c02e5007f\") " pod="kube-system/storage-provisioner"
	Jul 17 21:26:59 multinode-810165 kubelet[1391]: I0717 21:26:59.443484    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld627\" (UniqueName: \"kubernetes.io/projected/12c56db5-ec1b-4e20-b798-cb2c02e5007f-kube-api-access-ld627\") pod \"storage-provisioner\" (UID: \"12c56db5-ec1b-4e20-b798-cb2c02e5007f\") " pod="kube-system/storage-provisioner"
	Jul 17 21:26:59 multinode-810165 kubelet[1391]: I0717 21:26:59.443516    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cd666c9-e596-4d13-ba82-c51fdd049cd5-config-volume\") pod \"coredns-5d78c9869d-sz6sv\" (UID: \"0cd666c9-e596-4d13-ba82-c51fdd049cd5\") " pod="kube-system/coredns-5d78c9869d-sz6sv"
	Jul 17 21:26:59 multinode-810165 kubelet[1391]: I0717 21:26:59.443543    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkkbc\" (UniqueName: \"kubernetes.io/projected/0cd666c9-e596-4d13-ba82-c51fdd049cd5-kube-api-access-bkkbc\") pod \"coredns-5d78c9869d-sz6sv\" (UID: \"0cd666c9-e596-4d13-ba82-c51fdd049cd5\") " pod="kube-system/coredns-5d78c9869d-sz6sv"
	Jul 17 21:26:59 multinode-810165 kubelet[1391]: W0717 21:26:59.754236    1391 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/f50b1a146f82a35b69acbef510044a71ef0ffd7f7e690b15ca461dd5db496271/crio-a783477b2072d11ed7874ba3a5b83a986f10e399eecf6168fd352b84708ee8ca WatchSource:0}: Error finding container a783477b2072d11ed7874ba3a5b83a986f10e399eecf6168fd352b84708ee8ca: Status 404 returned error can't find the container with id a783477b2072d11ed7874ba3a5b83a986f10e399eecf6168fd352b84708ee8ca
	Jul 17 21:27:00 multinode-810165 kubelet[1391]: I0717 21:27:00.725317    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.725272659 podCreationTimestamp="2023-07-17 21:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 21:27:00.716581937 +0000 UTC m=+45.373488227" watchObservedRunningTime="2023-07-17 21:27:00.725272659 +0000 UTC m=+45.382178957"
	Jul 17 21:27:51 multinode-810165 kubelet[1391]: I0717 21:27:51.571813    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-sz6sv" podStartSLOduration=83.571776113 podCreationTimestamp="2023-07-17 21:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 21:27:00.736868099 +0000 UTC m=+45.393774397" watchObservedRunningTime="2023-07-17 21:27:51.571776113 +0000 UTC m=+96.228682402"
	Jul 17 21:27:51 multinode-810165 kubelet[1391]: I0717 21:27:51.571970    1391 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 21:27:51 multinode-810165 kubelet[1391]: W0717 21:27:51.578400    1391 reflector.go:533] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-810165" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-810165' and this object
	Jul 17 21:27:51 multinode-810165 kubelet[1391]: E0717 21:27:51.578442    1391 reflector.go:148] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-810165" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-810165' and this object
	Jul 17 21:27:51 multinode-810165 kubelet[1391]: I0717 21:27:51.679128    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n485z\" (UniqueName: \"kubernetes.io/projected/7cb1bbb8-1754-4746-94d1-a1e11cfb804a-kube-api-access-n485z\") pod \"busybox-67b7f59bb-mdhfd\" (UID: \"7cb1bbb8-1754-4746-94d1-a1e11cfb804a\") " pod="default/busybox-67b7f59bb-mdhfd"
	Jul 17 21:27:52 multinode-810165 kubelet[1391]: E0717 21:27:52.790368    1391 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jul 17 21:27:52 multinode-810165 kubelet[1391]: E0717 21:27:52.790428    1391 projected.go:198] Error preparing data for projected volume kube-api-access-n485z for pod default/busybox-67b7f59bb-mdhfd: failed to sync configmap cache: timed out waiting for the condition
	Jul 17 21:27:52 multinode-810165 kubelet[1391]: E0717 21:27:52.790518    1391 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7cb1bbb8-1754-4746-94d1-a1e11cfb804a-kube-api-access-n485z podName:7cb1bbb8-1754-4746-94d1-a1e11cfb804a nodeName:}" failed. No retries permitted until 2023-07-17 21:27:53.290494485 +0000 UTC m=+97.947400775 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n485z" (UniqueName: "kubernetes.io/projected/7cb1bbb8-1754-4746-94d1-a1e11cfb804a-kube-api-access-n485z") pod "busybox-67b7f59bb-mdhfd" (UID: "7cb1bbb8-1754-4746-94d1-a1e11cfb804a") : failed to sync configmap cache: timed out waiting for the condition
	Jul 17 21:27:53 multinode-810165 kubelet[1391]: W0717 21:27:53.714082    1391 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/f50b1a146f82a35b69acbef510044a71ef0ffd7f7e690b15ca461dd5db496271/crio-b4dad305b8cecfa43fac436d46fa723002e59f47b5bef59182e53a662f0602fb WatchSource:0}: Error finding container b4dad305b8cecfa43fac436d46fa723002e59f47b5bef59182e53a662f0602fb: Status 404 returned error can't find the container with id b4dad305b8cecfa43fac436d46fa723002e59f47b5bef59182e53a662f0602fb
	

                                                
                                                
-- /stdout --
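Note: the kindnet log in the dump above shows the route for the second node's pod CIDR being programmed (10.244.1.0/24 via 192.168.58.3). If the profile is still running, that routing state can be re-checked by hand along these lines (a sketch for the docker driver, where each node runs in a container named after the profile):

	docker exec multinode-810165 ip route show
	docker exec multinode-810165-m02 ip route show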
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-810165 -n multinode-810165
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-810165 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (5.22s)
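Note: for manual triage, the check PingHostFrom2Pods performs can be approximated from each busybox pod (a sketch: pod names come from the dump above; host.minikube.internal is the host alias minikube adds to cluster DNS, and 192.168.58.1 is the docker network gateway seen in the apiserver log, which may differ on other drivers):

	kubectl --context multinode-810165 exec busybox-67b7f59bb-zhxtx -- nslookup host.minikube.internal
	kubectl --context multinode-810165 exec busybox-67b7f59bb-mdhfd -- ping -c 1 192.168.58.1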

                                                
                                    
x
+
TestRunningBinaryUpgrade (69.36s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.17.0.224903629.exe start -p running-upgrade-427237 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0717 21:43:24.313346 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.17.0.224903629.exe start -p running-upgrade-427237 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m0.581487006s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-427237 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-427237 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.954974219s)

                                                
                                                
-- stdout --
	* [running-upgrade-427237] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-427237 in cluster running-upgrade-427237
	* Pulling base image ...
	* Updating the running docker "running-upgrade-427237" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 21:43:26.500496 1258495 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:43:26.501380 1258495 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:43:26.501399 1258495 out.go:309] Setting ErrFile to fd 2...
	I0717 21:43:26.501409 1258495 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:43:26.501701 1258495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
	I0717 21:43:26.507984 1258495 out.go:303] Setting JSON to false
	I0717 21:43:26.509202 1258495 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23150,"bootTime":1689607057,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0717 21:43:26.509286 1258495 start.go:138] virtualization:  
	I0717 21:43:26.513206 1258495 out.go:177] * [running-upgrade-427237] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 21:43:26.515808 1258495 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0717 21:43:26.520399 1258495 notify.go:220] Checking for updates...
	I0717 21:43:26.525985 1258495 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 21:43:26.527828 1258495 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:43:26.529210 1258495 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:43:26.530777 1258495 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	I0717 21:43:26.532354 1258495 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 21:43:26.534027 1258495 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:43:26.536393 1258495 config.go:182] Loaded profile config "running-upgrade-427237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0717 21:43:26.538886 1258495 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 21:43:26.540945 1258495 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:43:26.630305 1258495 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:43:26.630555 1258495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:43:26.786662 1258495 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2023-07-17 21:43:26.773067056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:43:26.786772 1258495 docker.go:294] overlay module found
	I0717 21:43:26.788898 1258495 out.go:177] * Using the docker driver based on existing profile
	I0717 21:43:26.790658 1258495 start.go:298] selected driver: docker
	I0717 21:43:26.790685 1258495 start.go:880] validating driver "docker" against &{Name:running-upgrade-427237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-427237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.230 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:43:26.790810 1258495 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:43:26.791105 1258495 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0717 21:43:26.792075 1258495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:43:26.872015 1258495 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2023-07-17 21:43:26.860196901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:43:26.872341 1258495 cni.go:84] Creating CNI manager for ""
	I0717 21:43:26.872357 1258495 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:43:26.872368 1258495 start_flags.go:319] config:
	{Name:running-upgrade-427237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-427237 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.230 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:43:26.875893 1258495 out.go:177] * Starting control plane node running-upgrade-427237 in cluster running-upgrade-427237
	I0717 21:43:26.877417 1258495 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 21:43:26.879066 1258495 out.go:177] * Pulling base image ...
	I0717 21:43:26.880864 1258495 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0717 21:43:26.880955 1258495 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0717 21:43:26.902720 1258495 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0717 21:43:26.902767 1258495 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0717 21:43:26.944710 1258495 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0717 21:43:26.944868 1258495 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/running-upgrade-427237/config.json ...
	I0717 21:43:26.945124 1258495 cache.go:195] Successfully downloaded all kic artifacts
	I0717 21:43:26.945199 1258495 start.go:365] acquiring machines lock for running-upgrade-427237: {Name:mkc4923f1f73a4a62619072bef2244681a9be0e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:43:26.945262 1258495 start.go:369] acquired machines lock for "running-upgrade-427237" in 37.645µs
	I0717 21:43:26.945283 1258495 start.go:96] Skipping create...Using existing machine configuration
	I0717 21:43:26.945304 1258495 fix.go:54] fixHost starting: 
	I0717 21:43:26.945568 1258495 cli_runner.go:164] Run: docker container inspect running-upgrade-427237 --format={{.State.Status}}
	I0717 21:43:26.945813 1258495 cache.go:107] acquiring lock: {Name:mkedba646b95d771e43740702c8fb9cd60a42c79 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:43:26.945875 1258495 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 21:43:26.945886 1258495 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 77.293µs
	I0717 21:43:26.945895 1258495 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 21:43:26.945909 1258495 cache.go:107] acquiring lock: {Name:mk93d25201f6f7bd6c0d281c5a805fa55d5e1773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:43:26.945944 1258495 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0717 21:43:26.945951 1258495 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 49.985µs
	I0717 21:43:26.945958 1258495 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0717 21:43:26.945964 1258495 cache.go:107] acquiring lock: {Name:mk98a8a10f9c96fe8cdb414f2ed4a9bf898bf68d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:43:26.945990 1258495 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0717 21:43:26.946002 1258495 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 35.454µs
	I0717 21:43:26.946009 1258495 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0717 21:43:26.946016 1258495 cache.go:107] acquiring lock: {Name:mk917f050c6f741aca6c74294dfa2e6d6cde4e05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:43:26.946042 1258495 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0717 21:43:26.946046 1258495 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 31.451µs
	I0717 21:43:26.946052 1258495 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0717 21:43:26.946059 1258495 cache.go:107] acquiring lock: {Name:mk81e0959e4c735549d416119f34c7e5992cad03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:43:26.946085 1258495 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0717 21:43:26.946089 1258495 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 31.614µs
	I0717 21:43:26.946096 1258495 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0717 21:43:26.946102 1258495 cache.go:107] acquiring lock: {Name:mk7c264ab4e632424507af2b6bc961f4dd7ebce3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:43:26.946127 1258495 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0717 21:43:26.946134 1258495 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 33.23µs
	I0717 21:43:26.946141 1258495 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0717 21:43:26.946152 1258495 cache.go:107] acquiring lock: {Name:mk66a448c5af4ad05b35b01bf89a6aec30c39cdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:43:26.946181 1258495 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0717 21:43:26.946189 1258495 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 38.154µs
	I0717 21:43:26.946196 1258495 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0717 21:43:26.946204 1258495 cache.go:107] acquiring lock: {Name:mk5ff5a548a20c6c4daaa89362bbf23fed93cfc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:43:26.946235 1258495 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0717 21:43:26.946244 1258495 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 40.435µs
	I0717 21:43:26.946250 1258495 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0717 21:43:26.946256 1258495 cache.go:87] Successfully saved all images to host disk.
	I0717 21:43:26.970121 1258495 fix.go:102] recreateIfNeeded on running-upgrade-427237: state=Running err=<nil>
	W0717 21:43:26.970158 1258495 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 21:43:26.973183 1258495 out.go:177] * Updating the running docker "running-upgrade-427237" container ...
	I0717 21:43:26.975020 1258495 machine.go:88] provisioning docker machine ...
	I0717 21:43:26.975070 1258495 ubuntu.go:169] provisioning hostname "running-upgrade-427237"
	I0717 21:43:26.975161 1258495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-427237
	I0717 21:43:26.997516 1258495 main.go:141] libmachine: Using SSH client type: native
	I0717 21:43:26.997967 1258495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34212 <nil> <nil>}
	I0717 21:43:26.997980 1258495 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-427237 && echo "running-upgrade-427237" | sudo tee /etc/hostname
	I0717 21:43:27.173680 1258495 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-427237
	
	I0717 21:43:27.173764 1258495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-427237
	I0717 21:43:27.192680 1258495 main.go:141] libmachine: Using SSH client type: native
	I0717 21:43:27.193128 1258495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34212 <nil> <nil>}
	I0717 21:43:27.193186 1258495 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-427237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-427237/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-427237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 21:43:27.344003 1258495 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 21:43:27.344037 1258495 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1130480/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1130480/.minikube}
	I0717 21:43:27.344078 1258495 ubuntu.go:177] setting up certificates
	I0717 21:43:27.344088 1258495 provision.go:83] configureAuth start
	I0717 21:43:27.344158 1258495 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-427237
	I0717 21:43:27.361990 1258495 provision.go:138] copyHostCerts
	I0717 21:43:27.362059 1258495 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem, removing ...
	I0717 21:43:27.362071 1258495 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem
	I0717 21:43:27.362149 1258495 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem (1082 bytes)
	I0717 21:43:27.362292 1258495 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem, removing ...
	I0717 21:43:27.362303 1258495 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem
	I0717 21:43:27.362330 1258495 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem (1123 bytes)
	I0717 21:43:27.362389 1258495 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem, removing ...
	I0717 21:43:27.362399 1258495 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem
	I0717 21:43:27.362422 1258495 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem (1675 bytes)
	I0717 21:43:27.362470 1258495 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-427237 san=[192.168.70.230 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-427237]
	I0717 21:43:27.670193 1258495 provision.go:172] copyRemoteCerts
	I0717 21:43:27.670263 1258495 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 21:43:27.670333 1258495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-427237
	I0717 21:43:27.689995 1258495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/running-upgrade-427237/id_rsa Username:docker}
	I0717 21:43:27.791266 1258495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 21:43:27.821092 1258495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 21:43:27.847154 1258495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 21:43:27.870934 1258495 provision.go:86] duration metric: configureAuth took 526.832514ms
	I0717 21:43:27.871002 1258495 ubuntu.go:193] setting minikube options for container-runtime
	I0717 21:43:27.871223 1258495 config.go:182] Loaded profile config "running-upgrade-427237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0717 21:43:27.871343 1258495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-427237
	I0717 21:43:27.890794 1258495 main.go:141] libmachine: Using SSH client type: native
	I0717 21:43:27.891276 1258495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34212 <nil> <nil>}
	I0717 21:43:27.891299 1258495 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 21:43:28.520625 1258495 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 21:43:28.520651 1258495 machine.go:91] provisioned docker machine in 1.54560743s
	I0717 21:43:28.520667 1258495 start.go:300] post-start starting for "running-upgrade-427237" (driver="docker")
	I0717 21:43:28.520717 1258495 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 21:43:28.520781 1258495 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 21:43:28.520824 1258495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-427237
	I0717 21:43:28.541322 1258495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/running-upgrade-427237/id_rsa Username:docker}
	I0717 21:43:28.642302 1258495 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 21:43:28.647535 1258495 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 21:43:28.647560 1258495 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 21:43:28.647572 1258495 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 21:43:28.647579 1258495 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0717 21:43:28.647588 1258495 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/addons for local assets ...
	I0717 21:43:28.647645 1258495 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/files for local assets ...
	I0717 21:43:28.647732 1258495 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem -> 11358722.pem in /etc/ssl/certs
	I0717 21:43:28.647849 1258495 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 21:43:28.656891 1258495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem --> /etc/ssl/certs/11358722.pem (1708 bytes)
	I0717 21:43:28.680261 1258495 start.go:303] post-start completed in 159.555418ms
	I0717 21:43:28.680372 1258495 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:43:28.680430 1258495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-427237
	I0717 21:43:28.716113 1258495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/running-upgrade-427237/id_rsa Username:docker}
	I0717 21:43:28.816181 1258495 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 21:43:28.822344 1258495 fix.go:56] fixHost completed within 1.877045632s
	I0717 21:43:28.822371 1258495 start.go:83] releasing machines lock for "running-upgrade-427237", held for 1.877093198s
	I0717 21:43:28.822460 1258495 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-427237
	I0717 21:43:28.844640 1258495 ssh_runner.go:195] Run: cat /version.json
	I0717 21:43:28.844692 1258495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-427237
	I0717 21:43:28.844969 1258495 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 21:43:28.845018 1258495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-427237
	I0717 21:43:28.868840 1258495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/running-upgrade-427237/id_rsa Username:docker}
	I0717 21:43:28.875308 1258495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34212 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/running-upgrade-427237/id_rsa Username:docker}
	W0717 21:43:28.970488 1258495 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 21:43:28.970578 1258495 ssh_runner.go:195] Run: systemctl --version
	I0717 21:43:29.067075 1258495 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 21:43:29.213741 1258495 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 21:43:29.221684 1258495 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:43:29.246839 1258495 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 21:43:29.246950 1258495 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:43:29.334421 1258495 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 21:43:29.334455 1258495 start.go:469] detecting cgroup driver to use...
	I0717 21:43:29.334514 1258495 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 21:43:29.334597 1258495 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 21:43:29.397034 1258495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 21:43:29.416391 1258495 docker.go:196] disabling cri-docker service (if available) ...
	I0717 21:43:29.416496 1258495 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 21:43:29.435471 1258495 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 21:43:29.451083 1258495 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0717 21:43:29.466898 1258495 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0717 21:43:29.466989 1258495 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 21:43:29.683296 1258495 docker.go:212] disabling docker service ...
	I0717 21:43:29.683415 1258495 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 21:43:29.715545 1258495 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 21:43:29.787324 1258495 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 21:43:30.061525 1258495 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 21:43:30.307966 1258495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 21:43:30.322702 1258495 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 21:43:30.345139 1258495 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 21:43:30.345214 1258495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:43:30.365361 1258495 out.go:177] 
	W0717 21:43:30.367938 1258495 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0717 21:43:30.367961 1258495 out.go:239] * 
	W0717 21:43:30.368856 1258495 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 21:43:30.370246 1258495 out.go:177] 

                                                
                                                
** /stderr **
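The find/mv pair in the log above implements minikube's rename-to-disable convention for conflicting CNI configs: any loopback, bridge, or podman file in /etc/cni/net.d that does not already carry the .mk_disabled suffix is moved aside, so the CNI minikube selects for the docker driver + crio runtime (kindnet) can own pod networking. A rough stdlib-Go sketch of the bridge/podman pass — illustrative only, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		// Directory taken from the log; minikube runs the equivalent find/mv over SSH.
		matches, _ := filepath.Glob("/etc/cni/net.d/*")
		for _, p := range matches {
			base := filepath.Base(p)
			if strings.HasSuffix(base, ".mk_disabled") {
				continue // already disabled on a previous start
			}
			if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
				// Renaming rather than deleting keeps the original config recoverable.
				if err := os.Rename(p, p+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, "disable", p, ":", err)
				}
			}
		}
	}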
version_upgrade_test.go:144: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-427237 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
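The crash itself is the pause_image rewrite: the HEAD binary edits /etc/crio/crio.conf.d/02-crio.conf, but the profile's kicbase image is v0.0.17 (see the docker inspect below), which evidently lacks that drop-in directory, so sed exits with status 2 and start aborts with RUNTIME_ENABLE. A hedged sketch of a guard that would avoid the failure by falling back to the monolithic /etc/crio/crio.conf the old image does ship; it would have to run inside the guest, and it is illustrative, not minikube's actual fix:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Newer kicbase images configure CRI-O via a drop-in file; the
		// v0.0.17 image only ships the monolithic /etc/crio/crio.conf.
		target := "/etc/crio/crio.conf.d/02-crio.conf"
		if _, err := os.Stat(target); os.IsNotExist(err) {
			target = "/etc/crio/crio.conf" // fall back instead of letting sed fail
		}
		sed := fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' %s`, target)
		if out, err := exec.Command("sh", "-c", sed).CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "update pause_image: %v\n%s", err, out)
			os.Exit(1)
		}
	}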
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-07-17 21:43:30.407801333 +0000 UTC m=+2432.526098813
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-427237
helpers_test.go:235: (dbg) docker inspect running-upgrade-427237:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "273d19445ed3feef40111bf6fb78b2abd9ef9237d216033d89ffbd452faadb8d",
	        "Created": "2023-07-17T21:42:39.530392929Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1255091,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T21:42:39.937253205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/273d19445ed3feef40111bf6fb78b2abd9ef9237d216033d89ffbd452faadb8d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/273d19445ed3feef40111bf6fb78b2abd9ef9237d216033d89ffbd452faadb8d/hostname",
	        "HostsPath": "/var/lib/docker/containers/273d19445ed3feef40111bf6fb78b2abd9ef9237d216033d89ffbd452faadb8d/hosts",
	        "LogPath": "/var/lib/docker/containers/273d19445ed3feef40111bf6fb78b2abd9ef9237d216033d89ffbd452faadb8d/273d19445ed3feef40111bf6fb78b2abd9ef9237d216033d89ffbd452faadb8d-json.log",
	        "Name": "/running-upgrade-427237",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-427237:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-427237",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c5f243b7cc5fc8851453e88602ed8c1f9c6360e515803db6fdb6a1a7a8daef3e-init/diff:/var/lib/docker/overlay2/91d35f45621d5d2dd44a0b45120a4090c5cf7778cba800ab447b251d1fccc8e8/diff:/var/lib/docker/overlay2/03ac8c4f9d92942af8b1c1de89006735cca4b3c9c0c9ea1bc83e1cd1fe182de9/diff:/var/lib/docker/overlay2/8297d472d0090845e6c2d1304c23d280e8346c3a5f39cf8edd482faf156fda00/diff:/var/lib/docker/overlay2/9a8b526a3ab6723c040befedb464f7f0b3433013c39a36a2d8caa9629034a950/diff:/var/lib/docker/overlay2/3c47ac2acb07d429bc70f26e468bbec9010bd84ebdb3cbd040b7ed91187e6aeb/diff:/var/lib/docker/overlay2/6c85ad2dbd769ea018b63a325a62a732227cf3f8dfafb8b9feee304e9470d9a5/diff:/var/lib/docker/overlay2/3e8ffb4a31f1e7b9a2618b408434846bfd6aa362ace569a7b3898851d4b20612/diff:/var/lib/docker/overlay2/b3650f93cfc75a97e53ce9f882e35092688e4195c923e4b2b9bd9e07b37db587/diff:/var/lib/docker/overlay2/01ab308c91367a4a0c6081df88a1a8ff701afe4d47b744394a0fd88a004eb23c/diff:/var/lib/docker/overlay2/298fa5
bdefc035f03069ac9140d5b5ac87f2b0c34dbcdd05d6a889490b493be5/diff:/var/lib/docker/overlay2/999867df6c4a118569581e66f79ea5de5a2353df5161d27829f25cbbd70d645d/diff:/var/lib/docker/overlay2/9fa94607f8fcb85ea0cb52039c8ba7cf6b65343f407541d908520aa600779f82/diff:/var/lib/docker/overlay2/56534230fdb04c5caacbf7cde3a86a75668bef5d914d6b5ffe39d2d4a397b7ce/diff:/var/lib/docker/overlay2/fe3ffe3e67da0c144d3349769243a547ece7678b211a8a463d3f4d45bc6ede81/diff:/var/lib/docker/overlay2/5732ec54541404bb073265f0229b2326e3a039bf5c7c9d13def4de1a5b84fadf/diff:/var/lib/docker/overlay2/56e7e55d2b74c42712b15aded09f676950b92b269cc326bd5f9f4ca40418d001/diff:/var/lib/docker/overlay2/caf9d841e0bb9c29a3a9970ce96f012d72cebe7ec7778b482fd7541794a274c2/diff:/var/lib/docker/overlay2/44cc670a62972dc69eff46fb5108cb23d1b93ba7423d811c2e9bad7b939b5ed0/diff:/var/lib/docker/overlay2/fe203ff49e2ebbd985d684f1c26502f50e7bcab8d8c84aadcb0e8eaf6eac4b91/diff:/var/lib/docker/overlay2/8f458612b4f51532d95f449276b7f27bc021a198ee4e92cddaa75c38a5c06df3/diff:/var/lib/d
ocker/overlay2/7cb7a2407922431574220d9e76f53691b88e4b3159f7f57817b048cfad1d8429/diff:/var/lib/docker/overlay2/11c6c370261d84a2c29b7ab1d06fa3ccf14c6341584f95a3d1c7ce680a0be572/diff:/var/lib/docker/overlay2/007df303c3aa9bd5c93929a39eb468e01ce5f76c4e07810c7a10ca85cd68b19b/diff:/var/lib/docker/overlay2/2d57688b976d6cfe79c541cc53b4f3a17b3c32789c4d9a0f59255a1e22831515/diff:/var/lib/docker/overlay2/80d13ad1063d5a4996a86b28866d13385c13933da2611383f2a757f91c57eae5/diff:/var/lib/docker/overlay2/5817d18d85463f86d9c0ccda927f27e5c0820e327f52491ba8b112b8aa7dae00/diff:/var/lib/docker/overlay2/4cd8dbb7b18acbc6f0d766cd51feca4e97a8a0b85cfea252af6afda6d95dd529/diff:/var/lib/docker/overlay2/962f10d97cdc090a6c25f15f5300fe3c50c49d5760fd58883fd47c071d6ea81d/diff:/var/lib/docker/overlay2/20a271029a3d980884a18ec45c3f66686090565c3690528bf35315f91feafe8f/diff:/var/lib/docker/overlay2/0a009456a7aa88a308139e8ca76c5ca6d20e2b740c172981ab23196b98cbe2b2/diff:/var/lib/docker/overlay2/cb61236b88cf9a800ea37c09b516cec012799a9f9407c856b8cbd585479
44f52/diff:/var/lib/docker/overlay2/cf486e6fc652434ae97fb53d173e8454d6b5cfbfcc6c43209f8d495cf256cce6/diff:/var/lib/docker/overlay2/aa76f2ab87c6539dcaefddd39f6e0887eb06a429c56a85f73163aa64dc6ed3b9/diff:/var/lib/docker/overlay2/93b75a2cd9f7f5595d53cea92230ccbd79506b5125f425b7491ae0c3bb13772e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c5f243b7cc5fc8851453e88602ed8c1f9c6360e515803db6fdb6a1a7a8daef3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c5f243b7cc5fc8851453e88602ed8c1f9c6360e515803db6fdb6a1a7a8daef3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c5f243b7cc5fc8851453e88602ed8c1f9c6360e515803db6fdb6a1a7a8daef3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-427237",
	                "Source": "/var/lib/docker/volumes/running-upgrade-427237/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-427237",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-427237",
	                "name.minikube.sigs.k8s.io": "running-upgrade-427237",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "90f683e255fa352e8b53815ff78b759cd9aca8ee5bfff30ab66f973bfa33185f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34212"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34211"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34210"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34209"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/90f683e255fa",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-427237": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.230"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "273d19445ed3",
	                        "running-upgrade-427237"
	                    ],
	                    "NetworkID": "4584185f918401f8cdaf6ba29625ae82c2ed3695e284b09d26199ca4564be38c",
	                    "EndpointID": "1612b382832f9a2d378a086f57867b22f1aab74879eafb21e9d23df749209a25",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.230",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:e6",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-427237 -n running-upgrade-427237
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-427237 -n running-upgrade-427237: exit status 4 (589.312421ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 21:43:30.872971 1259182 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-427237" does not appear in /home/jenkins/minikube-integration/16890-1130480/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-427237" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-427237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-427237
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-427237: (3.078132663s)
--- FAIL: TestRunningBinaryUpgrade (69.36s)

                                                
                                    
TestMissingContainerUpgrade (179.01s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.17.0.2694672360.exe start -p missing-upgrade-886828 --memory=2200 --driver=docker  --container-runtime=crio
E0717 21:38:24.313190 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.17.0.2694672360.exe start -p missing-upgrade-886828 --memory=2200 --driver=docker  --container-runtime=crio: (2m16.021906367s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-886828
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-886828: (1.843865851s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-886828
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-886828 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:341: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-886828 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (37.867161732s)

                                                
                                                
-- stdout --
	* [missing-upgrade-886828] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-886828 in cluster missing-upgrade-886828
	* Pulling base image ...
	* docker "missing-upgrade-886828" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 21:40:30.662697 1245884 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:40:30.662897 1245884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:40:30.662909 1245884 out.go:309] Setting ErrFile to fd 2...
	I0717 21:40:30.662915 1245884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:40:30.663264 1245884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
	I0717 21:40:30.663697 1245884 out.go:303] Setting JSON to false
	I0717 21:40:30.664757 1245884 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22974,"bootTime":1689607057,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0717 21:40:30.664826 1245884 start.go:138] virtualization:  
	I0717 21:40:30.669186 1245884 out.go:177] * [missing-upgrade-886828] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 21:40:30.671152 1245884 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 21:40:30.673041 1245884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:40:30.671275 1245884 notify.go:220] Checking for updates...
	I0717 21:40:30.677187 1245884 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:40:30.679541 1245884 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	I0717 21:40:30.681374 1245884 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 21:40:30.683370 1245884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:40:30.685768 1245884 config.go:182] Loaded profile config "missing-upgrade-886828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0717 21:40:30.688132 1245884 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 21:40:30.690153 1245884 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:40:30.716035 1245884 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:40:30.716135 1245884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:40:30.810004 1245884 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2023-07-17 21:40:30.79861364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:40:30.810116 1245884 docker.go:294] overlay module found
	I0717 21:40:30.812551 1245884 out.go:177] * Using the docker driver based on existing profile
	I0717 21:40:30.814441 1245884 start.go:298] selected driver: docker
	I0717 21:40:30.814465 1245884 start.go:880] validating driver "docker" against &{Name:missing-upgrade-886828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-886828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.45 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:40:30.814585 1245884 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:40:30.815400 1245884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:40:30.886691 1245884 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2023-07-17 21:40:30.876126708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:40:30.887007 1245884 cni.go:84] Creating CNI manager for ""
	I0717 21:40:30.887022 1245884 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:40:30.887033 1245884 start_flags.go:319] config:
	{Name:missing-upgrade-886828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-886828 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.45 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:40:30.889382 1245884 out.go:177] * Starting control plane node missing-upgrade-886828 in cluster missing-upgrade-886828
	I0717 21:40:30.891424 1245884 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 21:40:30.893381 1245884 out.go:177] * Pulling base image ...
	I0717 21:40:30.895454 1245884 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0717 21:40:30.895535 1245884 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0717 21:40:30.915654 1245884 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I0717 21:40:30.915823 1245884 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I0717 21:40:30.916334 1245884 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W0717 21:40:30.967664 1245884 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0717 21:40:30.967803 1245884 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/missing-upgrade-886828/config.json ...
	I0717 21:40:30.968138 1245884 cache.go:107] acquiring lock: {Name:mkedba646b95d771e43740702c8fb9cd60a42c79 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:40:30.968163 1245884 cache.go:107] acquiring lock: {Name:mk81e0959e4c735549d416119f34c7e5992cad03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:40:30.968219 1245884 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 21:40:30.968227 1245884 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.098µs
	I0717 21:40:30.968236 1245884 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 21:40:30.968243 1245884 cache.go:107] acquiring lock: {Name:mk7c264ab4e632424507af2b6bc961f4dd7ebce3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:40:30.968309 1245884 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I0717 21:40:30.968324 1245884 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 21:40:30.968459 1245884 cache.go:107] acquiring lock: {Name:mk93d25201f6f7bd6c0d281c5a805fa55d5e1773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:40:30.968627 1245884 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I0717 21:40:30.968715 1245884 cache.go:107] acquiring lock: {Name:mk98a8a10f9c96fe8cdb414f2ed4a9bf898bf68d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:40:30.968791 1245884 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0717 21:40:30.969012 1245884 cache.go:107] acquiring lock: {Name:mk66a448c5af4ad05b35b01bf89a6aec30c39cdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:40:30.969466 1245884 cache.go:107] acquiring lock: {Name:mk917f050c6f741aca6c74294dfa2e6d6cde4e05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:40:30.969818 1245884 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 21:40:30.970341 1245884 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I0717 21:40:30.970477 1245884 cache.go:107] acquiring lock: {Name:mk5ff5a548a20c6c4daaa89362bbf23fed93cfc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:40:30.970558 1245884 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 21:40:30.971336 1245884 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I0717 21:40:30.971700 1245884 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 21:40:30.971847 1245884 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 21:40:30.972638 1245884 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 21:40:30.972739 1245884 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I0717 21:40:30.972787 1245884 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0717 21:40:30.973629 1245884 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I0717 21:40:31.362805 1245884 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	W0717 21:40:31.451162 1245884 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I0717 21:40:31.451252 1245884 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I0717 21:40:31.454191 1245884 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	W0717 21:40:31.457512 1245884 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I0717 21:40:31.457572 1245884 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	W0717 21:40:31.459363 1245884 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0717 21:40:31.459430 1245884 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I0717 21:40:31.463140 1245884 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I0717 21:40:31.469758 1245884 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0717 21:40:31.591187 1245884 cache.go:157] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0717 21:40:31.591221 1245884 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 622.976659ms
	I0717 21:40:31.591235 1245884 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0717 21:40:32.011409 1245884 cache.go:157] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0717 21:40:32.011480 1245884 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 1.041007442s
	I0717 21:40:32.011509 1245884 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0717 21:40:32.158713 1245884 cache.go:157] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0717 21:40:32.158786 1245884 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.189469591s
	I0717 21:40:32.158813 1245884 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0717 21:40:32.170590 1245884 cache.go:157] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0717 21:40:32.170613 1245884 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.201901285s
	I0717 21:40:32.170626 1245884 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0717 21:40:32.363985 1245884 cache.go:157] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0717 21:40:32.364017 1245884 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.395561566s
	I0717 21:40:32.364031 1245884 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0717 21:40:33.263361 1245884 cache.go:157] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0717 21:40:33.263390 1245884 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.295237768s
	I0717 21:40:33.263403 1245884 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0717 21:40:35.231936 1245884 cache.go:157] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0717 21:40:35.231961 1245884 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 4.262968747s
	I0717 21:40:35.231974 1245884 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0717 21:40:35.231992 1245884 cache.go:87] Successfully saved all images to host disk.
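The "arch mismatch: want arm64 got amd64. fixing" warnings correspond to re-resolving each reference with an explicit platform before writing the tarball. A hedged sketch with go-containerregistry, the library minikube's image package builds on (option names may differ across library versions):

package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

func main() {
	ref, err := name.ParseReference("registry.k8s.io/pause:3.2")
	if err != nil {
		log.Fatal(err)
	}
	// Ask the registry for the arm64/linux variant of the manifest list.
	img, err := remote.Image(ref, remote.WithPlatform(v1.Platform{Architecture: "arm64", OS: "linux"}))
	if err != nil {
		log.Fatal(err)
	}
	cf, err := img.ConfigFile()
	if err != nil {
		log.Fatal(err)
	}
	if cf.Architecture != "arm64" {
		log.Printf("arch mismatch: want arm64 got %s. fixing", cf.Architecture)
	}
	// Save the resolved image as a tarball, mirroring the cache/images layout.
	if err := tarball.WriteToFile("pause_3.2.tar", ref, img); err != nil {
		log.Fatal(err)
	}
}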
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% (progress output elided)
	I0717 21:40:40.200525 1245884 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I0717 21:40:40.200561 1245884 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I0717 21:40:41.583510 1245884 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I0717 21:40:41.583540 1245884 cache.go:195] Successfully downloaded all kic artifacts
	I0717 21:40:41.583590 1245884 start.go:365] acquiring machines lock for missing-upgrade-886828: {Name:mk2881696cc57c5b788c123e46adbf454c1428e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:40:41.583663 1245884 start.go:369] acquired machines lock for "missing-upgrade-886828" in 45.448µs
	I0717 21:40:41.583687 1245884 start.go:96] Skipping create...Using existing machine configuration
	I0717 21:40:41.583692 1245884 fix.go:54] fixHost starting: 
	I0717 21:40:41.583963 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
	W0717 21:40:41.644833 1245884 cli_runner.go:211] docker container inspect missing-upgrade-886828 --format={{.State.Status}} returned with exit code 1
	I0717 21:40:41.644913 1245884 fix.go:102] recreateIfNeeded on missing-upgrade-886828: state= err=unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:41.644936 1245884 fix.go:107] machineExists: false. err=machine does not exist
	I0717 21:40:41.653529 1245884 out.go:177] * docker "missing-upgrade-886828" container is missing, will recreate.
	I0717 21:40:41.689030 1245884 delete.go:124] DEMOLISHING missing-upgrade-886828 ...
	I0717 21:40:41.689147 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
	W0717 21:40:41.706675 1245884 cli_runner.go:211] docker container inspect missing-upgrade-886828 --format={{.State.Status}} returned with exit code 1
	W0717 21:40:41.706742 1245884 stop.go:75] unable to get state: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:41.706762 1245884 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:41.707222 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
	W0717 21:40:41.730621 1245884 cli_runner.go:211] docker container inspect missing-upgrade-886828 --format={{.State.Status}} returned with exit code 1
	I0717 21:40:41.730688 1245884 delete.go:82] Unable to get host status for missing-upgrade-886828, assuming it has already been deleted: state: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:41.730772 1245884 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-886828
	W0717 21:40:41.752773 1245884 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-886828 returned with exit code 1
	I0717 21:40:41.752810 1245884 kic.go:367] could not find the container missing-upgrade-886828 to remove it. will try anyways
	I0717 21:40:41.752893 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
	W0717 21:40:41.781093 1245884 cli_runner.go:211] docker container inspect missing-upgrade-886828 --format={{.State.Status}} returned with exit code 1
	W0717 21:40:41.781149 1245884 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:41.781232 1245884 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-886828 /bin/bash -c "sudo init 0"
	W0717 21:40:41.800716 1245884 cli_runner.go:211] docker exec --privileged -t missing-upgrade-886828 /bin/bash -c "sudo init 0" returned with exit code 1
	I0717 21:40:41.800746 1245884 oci.go:647] error shutdown missing-upgrade-886828: docker exec --privileged -t missing-upgrade-886828 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:42.800950 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
	W0717 21:40:42.827397 1245884 cli_runner.go:211] docker container inspect missing-upgrade-886828 --format={{.State.Status}} returned with exit code 1
	I0717 21:40:42.827459 1245884 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:42.827472 1245884 oci.go:661] temporary error: container missing-upgrade-886828 status is  but expect it to be exited
	I0717 21:40:42.827499 1245884 retry.go:31] will retry after 410.972471ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:43.239299 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
	W0717 21:40:43.261963 1245884 cli_runner.go:211] docker container inspect missing-upgrade-886828 --format={{.State.Status}} returned with exit code 1
	I0717 21:40:43.262020 1245884 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:43.262032 1245884 oci.go:661] temporary error: container missing-upgrade-886828 status is  but expect it to be exited
	I0717 21:40:43.262064 1245884 retry.go:31] will retry after 812.163485ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:44.074664 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
	W0717 21:40:44.101627 1245884 cli_runner.go:211] docker container inspect missing-upgrade-886828 --format={{.State.Status}} returned with exit code 1
	I0717 21:40:44.101693 1245884 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:44.101707 1245884 oci.go:661] temporary error: container missing-upgrade-886828 status is  but expect it to be exited
	I0717 21:40:44.101729 1245884 retry.go:31] will retry after 1.505985203s: couldn't verify container is exited. %v: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:45.607955 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
	W0717 21:40:45.625662 1245884 cli_runner.go:211] docker container inspect missing-upgrade-886828 --format={{.State.Status}} returned with exit code 1
	I0717 21:40:45.625729 1245884 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:45.625743 1245884 oci.go:661] temporary error: container missing-upgrade-886828 status is  but expect it to be exited
	I0717 21:40:45.625768 1245884 retry.go:31] will retry after 1.671050519s: couldn't verify container is exited. %v: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:47.297290 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
	W0717 21:40:47.316911 1245884 cli_runner.go:211] docker container inspect missing-upgrade-886828 --format={{.State.Status}} returned with exit code 1
	I0717 21:40:47.316967 1245884 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:47.316978 1245884 oci.go:661] temporary error: container missing-upgrade-886828 status is  but expect it to be exited
	I0717 21:40:47.317000 1245884 retry.go:31] will retry after 1.660340614s: couldn't verify container is exited. %v: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:48.977531 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
	W0717 21:40:49.004496 1245884 cli_runner.go:211] docker container inspect missing-upgrade-886828 --format={{.State.Status}} returned with exit code 1
	I0717 21:40:49.004562 1245884 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:49.004578 1245884 oci.go:661] temporary error: container missing-upgrade-886828 status is  but expect it to be exited
	I0717 21:40:49.004604 1245884 retry.go:31] will retry after 2.762227927s: couldn't verify container is exited. %v: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:51.767039 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
	W0717 21:40:51.800573 1245884 cli_runner.go:211] docker container inspect missing-upgrade-886828 --format={{.State.Status}} returned with exit code 1
	I0717 21:40:51.801080 1245884 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:51.801100 1245884 oci.go:661] temporary error: container missing-upgrade-886828 status is  but expect it to be exited
	I0717 21:40:51.801142 1245884 retry.go:31] will retry after 8.16861635s: couldn't verify container is exited. %v: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:40:59.973349 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
	W0717 21:40:59.999971 1245884 cli_runner.go:211] docker container inspect missing-upgrade-886828 --format={{.State.Status}} returned with exit code 1
	I0717 21:41:00.000035 1245884 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	I0717 21:41:00.000052 1245884 oci.go:661] temporary error: container missing-upgrade-886828 status is  but expect it to be exited
	I0717 21:41:00.000084 1245884 oci.go:88] couldn't shut down missing-upgrade-886828 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-886828": docker container inspect missing-upgrade-886828 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-886828
	 
	I0717 21:41:00.000153 1245884 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-886828
	I0717 21:41:00.046783 1245884 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-886828
	W0717 21:41:00.083140 1245884 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-886828 returned with exit code 1
	I0717 21:41:00.083244 1245884 cli_runner.go:164] Run: docker network inspect missing-upgrade-886828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 21:41:00.120538 1245884 cli_runner.go:164] Run: docker network rm missing-upgrade-886828
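The retry.go:31 lines above show jittered, roughly exponential backoff (0.4s, 0.8s, 1.5s, ... 8.2s) around the inspect call, with the demolish path treating a never-confirmed shutdown as non-fatal. A self-contained sketch of that backoff loop, assuming a simple time budget rather than minikube's exact retry helper:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with jittered exponential backoff until it
// succeeds or the budget is spent, matching the shape of the delays
// logged above.
func retryExpo(fn func() error, initial, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("retry budget exhausted: %w", err)
		}
		// full jitter: sleep a random duration up to 2x the current step
		sleep := time.Duration(rand.Int63n(int64(2 * delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	_ = retryExpo(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("couldn't verify container is exited")
		}
		return nil
	}, 400*time.Millisecond, 30*time.Second)
}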
	I0717 21:41:00.266480 1245884 fix.go:114] Sleeping 1 second for extra luck!
	I0717 21:41:01.266678 1245884 start.go:125] createHost starting for "" (driver="docker")
	I0717 21:41:01.268512 1245884 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 21:41:01.268663 1245884 start.go:159] libmachine.API.Create for "missing-upgrade-886828" (driver="docker")
	I0717 21:41:01.268690 1245884 client.go:168] LocalClient.Create starting
	I0717 21:41:01.268768 1245884 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem
	I0717 21:41:01.268809 1245884 main.go:141] libmachine: Decoding PEM data...
	I0717 21:41:01.268829 1245884 main.go:141] libmachine: Parsing certificate...
	I0717 21:41:01.268894 1245884 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem
	I0717 21:41:01.268919 1245884 main.go:141] libmachine: Decoding PEM data...
	I0717 21:41:01.268933 1245884 main.go:141] libmachine: Parsing certificate...
	I0717 21:41:01.269219 1245884 cli_runner.go:164] Run: docker network inspect missing-upgrade-886828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 21:41:01.294262 1245884 cli_runner.go:211] docker network inspect missing-upgrade-886828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 21:41:01.294345 1245884 network_create.go:281] running [docker network inspect missing-upgrade-886828] to gather additional debugging logs...
	I0717 21:41:01.294367 1245884 cli_runner.go:164] Run: docker network inspect missing-upgrade-886828
	W0717 21:41:01.322752 1245884 cli_runner.go:211] docker network inspect missing-upgrade-886828 returned with exit code 1
	I0717 21:41:01.322785 1245884 network_create.go:284] error running [docker network inspect missing-upgrade-886828]: docker network inspect missing-upgrade-886828: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-886828 not found
	I0717 21:41:01.322798 1245884 network_create.go:286] output of [docker network inspect missing-upgrade-886828]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-886828 not found
	
	** /stderr **
	I0717 21:41:01.322861 1245884 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 21:41:01.355911 1245884 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-28f030d3740c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:10:e7:a1:da} reservation:<nil>}
	I0717 21:41:01.356398 1245884 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c64db558fd38 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:a6:03:82:36} reservation:<nil>}
	I0717 21:41:01.356762 1245884 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9ecea38d8cf1 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:e1:b8:17:42} reservation:<nil>}
	I0717 21:41:01.357289 1245884 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40015c8d00}
	I0717 21:41:01.357325 1245884 network_create.go:123] attempt to create docker network missing-upgrade-886828 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0717 21:41:01.357394 1245884 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-886828 missing-upgrade-886828
	I0717 21:41:01.456211 1245884 network_create.go:107] docker network missing-upgrade-886828 192.168.76.0/24 created
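network.go picks the first private /24 whose bridge is not already bound on the host, stepping 192.168.49.0 -> 58 -> 67 -> 76 as the skip lines above show. A sketch of that scan against the local interface table (firstFreeSubnet is an illustrative name; minikube also tracks reservations, which this omits):

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... in steps
// of 9 and returns the first /24 not already present on an interface.
func firstFreeSubnet() (*net.IPNet, error) {
	taken := map[string]bool{}
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, ifc := range ifaces {
		addrs, _ := ifc.Addrs()
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok {
				// record the network address, e.g. 192.168.49.0
				taken[ipn.IP.Mask(ipn.Mask).String()] = true
			}
		}
	}
	for third := 49; third < 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, subnet, _ := net.ParseCIDR(cidr)
		if !taken[subnet.IP.String()] {
			return subnet, nil
		}
	}
	return nil, fmt.Errorf("no free private /24 found")
}

func main() {
	s, err := firstFreeSubnet()
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", s) // e.g. 192.168.76.0/24
}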
	I0717 21:41:01.456245 1245884 kic.go:117] calculated static IP "192.168.76.2" for the "missing-upgrade-886828" container
	I0717 21:41:01.456319 1245884 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 21:41:01.503779 1245884 cli_runner.go:164] Run: docker volume create missing-upgrade-886828 --label name.minikube.sigs.k8s.io=missing-upgrade-886828 --label created_by.minikube.sigs.k8s.io=true
	I0717 21:41:01.527626 1245884 oci.go:103] Successfully created a docker volume missing-upgrade-886828
	I0717 21:41:01.527721 1245884 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-886828-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-886828 --entrypoint /usr/bin/test -v missing-upgrade-886828:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I0717 21:41:03.113805 1245884 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-886828-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-886828 --entrypoint /usr/bin/test -v missing-upgrade-886828:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib: (1.586043033s)
	I0717 21:41:03.113836 1245884 oci.go:107] Successfully prepared a docker volume missing-upgrade-886828
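The one-shot "preload sidecar" above exploits the fact that docker seeds a named volume from the image's content at the mount path on first use: the container only runs `test -d /var/lib` and is removed, leaving a populated volume behind for the real node container. A sketch of the same invocation via os/exec (volume and image names taken from the log):

package main

import (
	"log"
	"os/exec"
)

func main() {
	vol := "missing-upgrade-886828" // named volume the node container will mount
	image := "gcr.io/k8s-minikube/kicbase:v0.0.17"

	// docker creates the named volume on first mount and copies the
	// image's /var into it; `test -d /var/lib` just gives the container
	// a harmless command to run before --rm tears it down.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/test",
		"-v", vol+":/var",
		image, "-d", "/var/lib")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("preload sidecar failed: %v\n%s", err, out)
	}
	log.Printf("volume %s prepared", vol)
}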
	I0717 21:41:03.113853 1245884 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W0717 21:41:03.114006 1245884 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 21:41:03.114115 1245884 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 21:41:03.183689 1245884 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-886828 --name missing-upgrade-886828 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-886828 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-886828 --network missing-upgrade-886828 --ip 192.168.76.2 --volume missing-upgrade-886828:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I0717 21:41:03.546603 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Running}}
	I0717 21:41:03.571976 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
	I0717 21:41:03.597988 1245884 cli_runner.go:164] Run: docker exec missing-upgrade-886828 stat /var/lib/dpkg/alternatives/iptables
	I0717 21:41:03.666229 1245884 oci.go:144] the created container "missing-upgrade-886828" has a running status.
	I0717 21:41:03.666256 1245884 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/missing-upgrade-886828/id_rsa...
	I0717 21:41:04.110381 1245884 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/missing-upgrade-886828/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 21:41:04.142553 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
	I0717 21:41:04.161794 1245884 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 21:41:04.161816 1245884 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-886828 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 21:41:04.240334 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
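kic.go:221 generates the machine's SSH key pair, and kic_runner then copies the public half into /home/docker/.ssh/authorized_keys and chowns it. A sketch of producing that pair in authorized_keys format, assuming RSA as the id_rsa naming in the log suggests:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the machine key pair (the log writes these as id_rsa /
	// id_rsa.pub under .minikube/machines/<profile>/).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		log.Fatal(err)
	}
	// authorized_keys format, ready to copy into /home/docker/.ssh/
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		log.Fatal(err)
	}
}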
	I0717 21:41:04.261370 1245884 machine.go:88] provisioning docker machine ...
	I0717 21:41:04.261403 1245884 ubuntu.go:169] provisioning hostname "missing-upgrade-886828"
	I0717 21:41:04.261473 1245884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-886828
	I0717 21:41:04.280583 1245884 main.go:141] libmachine: Using SSH client type: native
	I0717 21:41:04.281050 1245884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34200 <nil> <nil>}
	I0717 21:41:04.281073 1245884 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-886828 && echo "missing-upgrade-886828" | sudo tee /etc/hostname
	I0717 21:41:04.441231 1245884 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-886828
	
	I0717 21:41:04.441382 1245884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-886828
	I0717 21:41:04.468660 1245884 main.go:141] libmachine: Using SSH client type: native
	I0717 21:41:04.469089 1245884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34200 <nil> <nil>}
	I0717 21:41:04.469107 1245884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-886828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-886828/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-886828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 21:41:04.618546 1245884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
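Each "About to run SSH command" block executes over the host port docker mapped to the container's 22/tcp (34200 here). A sketch of that round trip with golang.org/x/crypto/ssh, assuming key auth as the docker user and ignoring host keys, which is tolerable for a throwaway test VM on loopback:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM on loopback only
	}
	// 34200 is the host port docker published for the container's 22/tcp.
	client, err := ssh.Dial("tcp", "127.0.0.1:34200", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	name := "missing-upgrade-886828"
	out, err := sess.CombinedOutput(fmt.Sprintf(
		`sudo hostname %s && echo %q | sudo tee /etc/hostname`, name, name))
	if err != nil {
		log.Fatalf("%v: %s", err, out)
	}
}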
	I0717 21:41:04.618611 1245884 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1130480/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1130480/.minikube}
	I0717 21:41:04.618646 1245884 ubuntu.go:177] setting up certificates
	I0717 21:41:04.618665 1245884 provision.go:83] configureAuth start
	I0717 21:41:04.618751 1245884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-886828
	I0717 21:41:04.646587 1245884 provision.go:138] copyHostCerts
	I0717 21:41:04.646655 1245884 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem, removing ...
	I0717 21:41:04.646669 1245884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem
	I0717 21:41:04.646746 1245884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem (1082 bytes)
	I0717 21:41:04.646839 1245884 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem, removing ...
	I0717 21:41:04.646851 1245884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem
	I0717 21:41:04.646878 1245884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem (1123 bytes)
	I0717 21:41:04.646934 1245884 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem, removing ...
	I0717 21:41:04.646942 1245884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem
	I0717 21:41:04.646966 1245884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem (1675 bytes)
	I0717 21:41:04.647012 1245884 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-886828 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-886828]
	I0717 21:41:05.007427 1245884 provision.go:172] copyRemoteCerts
	I0717 21:41:05.007512 1245884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 21:41:05.007563 1245884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-886828
	I0717 21:41:05.035334 1245884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34200 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/missing-upgrade-886828/id_rsa Username:docker}
	I0717 21:41:05.134925 1245884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 21:41:05.160444 1245884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 21:41:05.185835 1245884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 21:41:05.208529 1245884 provision.go:86] duration metric: configureAuth took 589.82691ms
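provision.go:112 issues a server certificate signed by the profile's CA with the SANs listed above (192.168.76.2, 127.0.0.1, localhost, minikube, the hostname). A stdlib sketch of that issuance; the self-generated CA here merely stands in for ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA for ca.pem / ca-key.pem from the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikube-ca"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-886828"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		// SANs from the provision.go:112 line above.
		DNSNames:    []string{"localhost", "minikube", "missing-upgrade-886828"},
		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // server.pem
}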
	I0717 21:41:05.208570 1245884 ubuntu.go:193] setting minikube options for container-runtime
	I0717 21:41:05.208750 1245884 config.go:182] Loaded profile config "missing-upgrade-886828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0717 21:41:05.208862 1245884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-886828
	I0717 21:41:05.227387 1245884 main.go:141] libmachine: Using SSH client type: native
	I0717 21:41:05.227822 1245884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34200 <nil> <nil>}
	I0717 21:41:05.227846 1245884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 21:41:05.635858 1245884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 21:41:05.635887 1245884 machine.go:91] provisioned docker machine in 1.374495197s
	I0717 21:41:05.635896 1245884 client.go:171] LocalClient.Create took 4.367197461s
	I0717 21:41:05.635909 1245884 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-886828" took 4.367246036s
	I0717 21:41:05.635918 1245884 start.go:300] post-start starting for "missing-upgrade-886828" (driver="docker")
	I0717 21:41:05.635927 1245884 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 21:41:05.635995 1245884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 21:41:05.636039 1245884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-886828
	I0717 21:41:05.654440 1245884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34200 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/missing-upgrade-886828/id_rsa Username:docker}
	I0717 21:41:05.754505 1245884 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 21:41:05.758586 1245884 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 21:41:05.758610 1245884 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 21:41:05.758621 1245884 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 21:41:05.758627 1245884 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0717 21:41:05.758637 1245884 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/addons for local assets ...
	I0717 21:41:05.758697 1245884 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/files for local assets ...
	I0717 21:41:05.758786 1245884 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem -> 11358722.pem in /etc/ssl/certs
	I0717 21:41:05.758885 1245884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 21:41:05.767647 1245884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem --> /etc/ssl/certs/11358722.pem (1708 bytes)
	I0717 21:41:05.791029 1245884 start.go:303] post-start completed in 155.095593ms
	I0717 21:41:05.791435 1245884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-886828
	I0717 21:41:05.809604 1245884 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/missing-upgrade-886828/config.json ...
	I0717 21:41:05.809915 1245884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:41:05.809967 1245884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-886828
	I0717 21:41:05.827739 1245884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34200 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/missing-upgrade-886828/id_rsa Username:docker}
	I0717 21:41:05.924559 1245884 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 21:41:05.930236 1245884 start.go:128] duration metric: createHost completed in 4.663516055s
	I0717 21:41:05.930333 1245884 cli_runner.go:164] Run: docker container inspect missing-upgrade-886828 --format={{.State.Status}}
	W0717 21:41:05.947795 1245884 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 21:41:05.947824 1245884 machine.go:88] provisioning docker machine ...
	I0717 21:41:05.947841 1245884 ubuntu.go:169] provisioning hostname "missing-upgrade-886828"
	I0717 21:41:05.947905 1245884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-886828
	I0717 21:41:05.968176 1245884 main.go:141] libmachine: Using SSH client type: native
	I0717 21:41:05.968620 1245884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34200 <nil> <nil>}
	I0717 21:41:05.968639 1245884 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-886828 && echo "missing-upgrade-886828" | sudo tee /etc/hostname
	I0717 21:41:06.122371 1245884 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-886828
	
	I0717 21:41:06.122485 1245884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-886828
	I0717 21:41:06.146569 1245884 main.go:141] libmachine: Using SSH client type: native
	I0717 21:41:06.147047 1245884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34200 <nil> <nil>}
	I0717 21:41:06.147072 1245884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-886828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-886828/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-886828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 21:41:06.286145 1245884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 21:41:06.286233 1245884 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1130480/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1130480/.minikube}
	I0717 21:41:06.286266 1245884 ubuntu.go:177] setting up certificates
	I0717 21:41:06.286287 1245884 provision.go:83] configureAuth start
	I0717 21:41:06.286362 1245884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-886828
	I0717 21:41:06.304403 1245884 provision.go:138] copyHostCerts
	I0717 21:41:06.304466 1245884 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem, removing ...
	I0717 21:41:06.304474 1245884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem
	I0717 21:41:06.304549 1245884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem (1082 bytes)
	I0717 21:41:06.304638 1245884 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem, removing ...
	I0717 21:41:06.304643 1245884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem
	I0717 21:41:06.304671 1245884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem (1123 bytes)
	I0717 21:41:06.304721 1245884 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem, removing ...
	I0717 21:41:06.304726 1245884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem
	I0717 21:41:06.304747 1245884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem (1675 bytes)
	I0717 21:41:06.304791 1245884 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-886828 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-886828]
	I0717 21:41:06.766635 1245884 provision.go:172] copyRemoteCerts
	I0717 21:41:06.766710 1245884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 21:41:06.766756 1245884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-886828
	I0717 21:41:06.787548 1245884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34200 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/missing-upgrade-886828/id_rsa Username:docker}
	I0717 21:41:06.886628 1245884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 21:41:06.910390 1245884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 21:41:06.933808 1245884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 21:41:06.957317 1245884 provision.go:86] duration metric: configureAuth took 671.006664ms
	I0717 21:41:06.957343 1245884 ubuntu.go:193] setting minikube options for container-runtime
	I0717 21:41:06.957521 1245884 config.go:182] Loaded profile config "missing-upgrade-886828": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0717 21:41:06.957619 1245884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-886828
	I0717 21:41:06.976658 1245884 main.go:141] libmachine: Using SSH client type: native
	I0717 21:41:06.977098 1245884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34200 <nil> <nil>}
	I0717 21:41:06.977120 1245884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 21:41:07.299447 1245884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 21:41:07.299469 1245884 machine.go:91] provisioned docker machine in 1.351637871s
	I0717 21:41:07.299480 1245884 start.go:300] post-start starting for "missing-upgrade-886828" (driver="docker")
	I0717 21:41:07.299490 1245884 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 21:41:07.299556 1245884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 21:41:07.299602 1245884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-886828
	I0717 21:41:07.318242 1245884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34200 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/missing-upgrade-886828/id_rsa Username:docker}
	I0717 21:41:07.418579 1245884 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 21:41:07.422646 1245884 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 21:41:07.422674 1245884 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 21:41:07.422685 1245884 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 21:41:07.422691 1245884 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0717 21:41:07.422701 1245884 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/addons for local assets ...
	I0717 21:41:07.422761 1245884 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/files for local assets ...
	I0717 21:41:07.422847 1245884 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem -> 11358722.pem in /etc/ssl/certs
	I0717 21:41:07.422954 1245884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 21:41:07.431760 1245884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem --> /etc/ssl/certs/11358722.pem (1708 bytes)
	I0717 21:41:07.453951 1245884 start.go:303] post-start completed in 154.45548ms
	I0717 21:41:07.454030 1245884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:41:07.454083 1245884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-886828
	I0717 21:41:07.472045 1245884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34200 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/missing-upgrade-886828/id_rsa Username:docker}
	I0717 21:41:07.567366 1245884 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 21:41:07.573259 1245884 fix.go:56] fixHost completed within 25.989556549s
	I0717 21:41:07.573281 1245884 start.go:83] releasing machines lock for "missing-upgrade-886828", held for 25.989605788s
	I0717 21:41:07.573367 1245884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-886828
	I0717 21:41:07.591540 1245884 ssh_runner.go:195] Run: cat /version.json
	I0717 21:41:07.591591 1245884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-886828
	I0717 21:41:07.591840 1245884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 21:41:07.591896 1245884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-886828
	I0717 21:41:07.610459 1245884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34200 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/missing-upgrade-886828/id_rsa Username:docker}
	I0717 21:41:07.621607 1245884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34200 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/missing-upgrade-886828/id_rsa Username:docker}
	W0717 21:41:07.705523 1245884 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 21:41:07.705656 1245884 ssh_runner.go:195] Run: systemctl --version
	I0717 21:41:07.837480 1245884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 21:41:07.919676 1245884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 21:41:07.925627 1245884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:41:07.949246 1245884 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 21:41:07.949324 1245884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:41:07.983569 1245884 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
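The two find/mv passes rename loopback and bridge/podman CNI configs to *.mk_disabled so they stop shadowing minikube's own CNI choice. A Go equivalent of that rename sweep (paths assume the node's rootfs, so on any other machine the globs simply match nothing):

package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	patterns := []string{
		"/etc/cni/net.d/*loopback.conf*",
		"/etc/cni/net.d/*bridge*",
		"/etc/cni/net.d/*podman*",
	}
	for _, p := range patterns {
		matches, err := filepath.Glob(p)
		if err != nil {
			log.Fatal(err) // only fires on a malformed pattern
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on an earlier pass
			}
			// Same effect as the find/-exec mv pair in the log.
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			log.Printf("disabled %s", m)
		}
	}
}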
	I0717 21:41:07.983637 1245884 start.go:469] detecting cgroup driver to use...
	I0717 21:41:07.983679 1245884 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 21:41:07.983775 1245884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 21:41:08.012581 1245884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 21:41:08.025334 1245884 docker.go:196] disabling cri-docker service (if available) ...
	I0717 21:41:08.025452 1245884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 21:41:08.038105 1245884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 21:41:08.050935 1245884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0717 21:41:08.064131 1245884 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0717 21:41:08.064218 1245884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 21:41:08.183111 1245884 docker.go:212] disabling docker service ...
	I0717 21:41:08.183207 1245884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 21:41:08.197674 1245884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 21:41:08.210285 1245884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 21:41:08.316937 1245884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 21:41:08.426772 1245884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 21:41:08.438728 1245884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 21:41:08.455381 1245884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 21:41:08.455445 1245884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:41:08.467731 1245884 out.go:177] 
	W0717 21:41:08.469724 1245884 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0717 21:41:08.469746 1245884 out.go:239] * 
	W0717 21:41:08.470909 1245884 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 21:41:08.472515 1245884 out.go:177] 

** /stderr **
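The fatal step in the stderr stream above is the pause-image rewrite: sed exits with status 2 because /etc/crio/crio.conf.d/02-crio.conf does not exist inside the kicbase v0.0.17 container. A defensive variant would probe for the config file first; a minimal sketch, assuming the older image keeps its CRI-O configuration in /etc/crio/crio.conf (an assumption about the image layout, not minikube's actual fallback logic):

	# Hypothetical guard: update pause_image in whichever CRI-O config file exists.
	for f in /etc/crio/crio.conf.d/02-crio.conf /etc/crio/crio.conf; do
	  sudo test -f "$f" || continue
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$f"
	  break
	done

If neither file exists the loop falls through silently, so a real fix would still need to surface the missing config rather than mask it.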
version_upgrade_test.go:343: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-886828 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:345: *** TestMissingContainerUpgrade FAILED at 2023-07-17 21:41:08.517398302 +0000 UTC m=+2290.635695790
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-886828
helpers_test.go:235: (dbg) docker inspect missing-upgrade-886828:

-- stdout --
	[
	    {
	        "Id": "0c1ec7b01a258e4a5760442c1e981ac54ff72837c27167ba6555ef3cfd66e1dc",
	        "Created": "2023-07-17T21:41:03.201125856Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1247967,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T21:41:03.536642533Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/0c1ec7b01a258e4a5760442c1e981ac54ff72837c27167ba6555ef3cfd66e1dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0c1ec7b01a258e4a5760442c1e981ac54ff72837c27167ba6555ef3cfd66e1dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/0c1ec7b01a258e4a5760442c1e981ac54ff72837c27167ba6555ef3cfd66e1dc/hosts",
	        "LogPath": "/var/lib/docker/containers/0c1ec7b01a258e4a5760442c1e981ac54ff72837c27167ba6555ef3cfd66e1dc/0c1ec7b01a258e4a5760442c1e981ac54ff72837c27167ba6555ef3cfd66e1dc-json.log",
	        "Name": "/missing-upgrade-886828",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "missing-upgrade-886828:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-886828",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bfbc75cd5b2ccef717220b5e8cbafb2e76064498db593dd421dd19fa16b2619d-init/diff:/var/lib/docker/overlay2/91d35f45621d5d2dd44a0b45120a4090c5cf7778cba800ab447b251d1fccc8e8/diff:/var/lib/docker/overlay2/03ac8c4f9d92942af8b1c1de89006735cca4b3c9c0c9ea1bc83e1cd1fe182de9/diff:/var/lib/docker/overlay2/8297d472d0090845e6c2d1304c23d280e8346c3a5f39cf8edd482faf156fda00/diff:/var/lib/docker/overlay2/9a8b526a3ab6723c040befedb464f7f0b3433013c39a36a2d8caa9629034a950/diff:/var/lib/docker/overlay2/3c47ac2acb07d429bc70f26e468bbec9010bd84ebdb3cbd040b7ed91187e6aeb/diff:/var/lib/docker/overlay2/6c85ad2dbd769ea018b63a325a62a732227cf3f8dfafb8b9feee304e9470d9a5/diff:/var/lib/docker/overlay2/3e8ffb4a31f1e7b9a2618b408434846bfd6aa362ace569a7b3898851d4b20612/diff:/var/lib/docker/overlay2/b3650f93cfc75a97e53ce9f882e35092688e4195c923e4b2b9bd9e07b37db587/diff:/var/lib/docker/overlay2/01ab308c91367a4a0c6081df88a1a8ff701afe4d47b744394a0fd88a004eb23c/diff:/var/lib/docker/overlay2/298fa5
bdefc035f03069ac9140d5b5ac87f2b0c34dbcdd05d6a889490b493be5/diff:/var/lib/docker/overlay2/999867df6c4a118569581e66f79ea5de5a2353df5161d27829f25cbbd70d645d/diff:/var/lib/docker/overlay2/9fa94607f8fcb85ea0cb52039c8ba7cf6b65343f407541d908520aa600779f82/diff:/var/lib/docker/overlay2/56534230fdb04c5caacbf7cde3a86a75668bef5d914d6b5ffe39d2d4a397b7ce/diff:/var/lib/docker/overlay2/fe3ffe3e67da0c144d3349769243a547ece7678b211a8a463d3f4d45bc6ede81/diff:/var/lib/docker/overlay2/5732ec54541404bb073265f0229b2326e3a039bf5c7c9d13def4de1a5b84fadf/diff:/var/lib/docker/overlay2/56e7e55d2b74c42712b15aded09f676950b92b269cc326bd5f9f4ca40418d001/diff:/var/lib/docker/overlay2/caf9d841e0bb9c29a3a9970ce96f012d72cebe7ec7778b482fd7541794a274c2/diff:/var/lib/docker/overlay2/44cc670a62972dc69eff46fb5108cb23d1b93ba7423d811c2e9bad7b939b5ed0/diff:/var/lib/docker/overlay2/fe203ff49e2ebbd985d684f1c26502f50e7bcab8d8c84aadcb0e8eaf6eac4b91/diff:/var/lib/docker/overlay2/8f458612b4f51532d95f449276b7f27bc021a198ee4e92cddaa75c38a5c06df3/diff:/var/lib/d
ocker/overlay2/7cb7a2407922431574220d9e76f53691b88e4b3159f7f57817b048cfad1d8429/diff:/var/lib/docker/overlay2/11c6c370261d84a2c29b7ab1d06fa3ccf14c6341584f95a3d1c7ce680a0be572/diff:/var/lib/docker/overlay2/007df303c3aa9bd5c93929a39eb468e01ce5f76c4e07810c7a10ca85cd68b19b/diff:/var/lib/docker/overlay2/2d57688b976d6cfe79c541cc53b4f3a17b3c32789c4d9a0f59255a1e22831515/diff:/var/lib/docker/overlay2/80d13ad1063d5a4996a86b28866d13385c13933da2611383f2a757f91c57eae5/diff:/var/lib/docker/overlay2/5817d18d85463f86d9c0ccda927f27e5c0820e327f52491ba8b112b8aa7dae00/diff:/var/lib/docker/overlay2/4cd8dbb7b18acbc6f0d766cd51feca4e97a8a0b85cfea252af6afda6d95dd529/diff:/var/lib/docker/overlay2/962f10d97cdc090a6c25f15f5300fe3c50c49d5760fd58883fd47c071d6ea81d/diff:/var/lib/docker/overlay2/20a271029a3d980884a18ec45c3f66686090565c3690528bf35315f91feafe8f/diff:/var/lib/docker/overlay2/0a009456a7aa88a308139e8ca76c5ca6d20e2b740c172981ab23196b98cbe2b2/diff:/var/lib/docker/overlay2/cb61236b88cf9a800ea37c09b516cec012799a9f9407c856b8cbd585479
44f52/diff:/var/lib/docker/overlay2/cf486e6fc652434ae97fb53d173e8454d6b5cfbfcc6c43209f8d495cf256cce6/diff:/var/lib/docker/overlay2/aa76f2ab87c6539dcaefddd39f6e0887eb06a429c56a85f73163aa64dc6ed3b9/diff:/var/lib/docker/overlay2/93b75a2cd9f7f5595d53cea92230ccbd79506b5125f425b7491ae0c3bb13772e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bfbc75cd5b2ccef717220b5e8cbafb2e76064498db593dd421dd19fa16b2619d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bfbc75cd5b2ccef717220b5e8cbafb2e76064498db593dd421dd19fa16b2619d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bfbc75cd5b2ccef717220b5e8cbafb2e76064498db593dd421dd19fa16b2619d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-886828",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-886828/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-886828",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-886828",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-886828",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c628787b1474409806d48eb8bbc1878ef3630e00c24ea84b031e74699f8cdd03",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34200"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34199"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34196"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34198"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34197"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c628787b1474",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-886828": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0c1ec7b01a25",
	                        "missing-upgrade-886828"
	                    ],
	                    "NetworkID": "865199bec65b71ccba48d723814c0b9e7b05ff3a57973a53df8d4efd9b8cf85f",
	                    "EndpointID": "0845f57a1c7f0c1c8f2e3db3d81c877fa4c23c64818fe4ba366f4c8327cef8c8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-886828 -n missing-upgrade-886828
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-886828 -n missing-upgrade-886828: exit status 6 (323.034638ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0717 21:41:08.846570 1248953 status.go:415] kubeconfig endpoint: got: 192.168.59.45:8443, want: 192.168.76.2:8443

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-886828" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
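The exit status 6 above is the stale-kubeconfig condition the status output itself flags: the kubeconfig still points at an endpoint from an earlier cluster (192.168.59.45) while the recreated container was assigned 192.168.76.2. The warning's suggested fix, run against this profile, would look like this (a usage sketch only; the profile flag mirrors the commands above):

	minikube update-context -p missing-upgrade-886828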
helpers_test.go:175: Cleaning up "missing-upgrade-886828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-886828
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-886828: (1.884426391s)
--- FAIL: TestMissingContainerUpgrade (179.01s)

TestStoppedBinaryUpgrade/Upgrade (69.77s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.17.0.3058554300.exe start -p stopped-upgrade-189380 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0717 21:41:23.384380 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.17.0.3058554300.exe start -p stopped-upgrade-189380 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m1.269850901s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.17.0.3058554300.exe -p stopped-upgrade-189380 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.17.0.3058554300.exe -p stopped-upgrade-189380 stop: (2.110959304s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-189380 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-189380 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.381127515s)

-- stdout --
	* [stopped-upgrade-189380] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-189380 in cluster stopped-upgrade-189380
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-189380" ...
	
	

-- /stdout --
** stderr ** 
	I0717 21:42:15.332190 1252755 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:42:15.332358 1252755 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:42:15.332366 1252755 out.go:309] Setting ErrFile to fd 2...
	I0717 21:42:15.332372 1252755 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:42:15.332623 1252755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
	I0717 21:42:15.332976 1252755 out.go:303] Setting JSON to false
	I0717 21:42:15.334039 1252755 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23079,"bootTime":1689607057,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0717 21:42:15.334107 1252755 start.go:138] virtualization:  
	I0717 21:42:15.336588 1252755 out.go:177] * [stopped-upgrade-189380] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 21:42:15.339386 1252755 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 21:42:15.340880 1252755 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:42:15.339485 1252755 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0717 21:42:15.339525 1252755 notify.go:220] Checking for updates...
	I0717 21:42:15.344870 1252755 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:42:15.346703 1252755 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	I0717 21:42:15.348101 1252755 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 21:42:15.349525 1252755 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:42:15.351885 1252755 config.go:182] Loaded profile config "stopped-upgrade-189380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0717 21:42:15.360984 1252755 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 21:42:15.362548 1252755 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:42:15.405340 1252755 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:42:15.405440 1252755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:42:15.503648 1252755 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0717 21:42:15.509947 1252755 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-17 21:42:15.49668931 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:42:15.510051 1252755 docker.go:294] overlay module found
	I0717 21:42:15.512641 1252755 out.go:177] * Using the docker driver based on existing profile
	I0717 21:42:15.514218 1252755 start.go:298] selected driver: docker
	I0717 21:42:15.514244 1252755 start.go:880] validating driver "docker" against &{Name:stopped-upgrade-189380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-189380 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.197 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:42:15.514350 1252755 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:42:15.514967 1252755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:42:15.602530 1252755 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-17 21:42:15.592487973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:42:15.602849 1252755 cni.go:84] Creating CNI manager for ""
	I0717 21:42:15.602866 1252755 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:42:15.602880 1252755 start_flags.go:319] config:
	{Name:stopped-upgrade-189380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-189380 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.197 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:42:15.605386 1252755 out.go:177] * Starting control plane node stopped-upgrade-189380 in cluster stopped-upgrade-189380
	I0717 21:42:15.606836 1252755 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 21:42:15.608703 1252755 out.go:177] * Pulling base image ...
	I0717 21:42:15.610725 1252755 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0717 21:42:15.610774 1252755 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0717 21:42:15.628621 1252755 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0717 21:42:15.628645 1252755 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0717 21:42:15.686151 1252755 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0717 21:42:15.686332 1252755 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/stopped-upgrade-189380/config.json ...
	I0717 21:42:15.686407 1252755 cache.go:107] acquiring lock: {Name:mkedba646b95d771e43740702c8fb9cd60a42c79 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:42:15.686490 1252755 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 21:42:15.686499 1252755 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 98.569µs
	I0717 21:42:15.686510 1252755 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 21:42:15.686518 1252755 cache.go:107] acquiring lock: {Name:mk93d25201f6f7bd6c0d281c5a805fa55d5e1773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:42:15.686551 1252755 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0717 21:42:15.686555 1252755 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 38.819µs
	I0717 21:42:15.686562 1252755 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0717 21:42:15.686569 1252755 cache.go:107] acquiring lock: {Name:mk98a8a10f9c96fe8cdb414f2ed4a9bf898bf68d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:42:15.686595 1252755 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0717 21:42:15.686599 1252755 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 31.581µs
	I0717 21:42:15.686606 1252755 cache.go:195] Successfully downloaded all kic artifacts
	I0717 21:42:15.686614 1252755 cache.go:107] acquiring lock: {Name:mk917f050c6f741aca6c74294dfa2e6d6cde4e05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:42:15.686643 1252755 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0717 21:42:15.686641 1252755 start.go:365] acquiring machines lock for stopped-upgrade-189380: {Name:mkb9ad50c7d6c225b92f0fb189df15c0d11fe527 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:42:15.686648 1252755 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 35.151µs
	I0717 21:42:15.686655 1252755 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0717 21:42:15.686662 1252755 cache.go:107] acquiring lock: {Name:mk81e0959e4c735549d416119f34c7e5992cad03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:42:15.686686 1252755 start.go:369] acquired machines lock for "stopped-upgrade-189380" in 26.093µs
	I0717 21:42:15.686693 1252755 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0717 21:42:15.686700 1252755 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 37.407µs
	I0717 21:42:15.686703 1252755 start.go:96] Skipping create...Using existing machine configuration
	I0717 21:42:15.686711 1252755 fix.go:54] fixHost starting: 
	I0717 21:42:15.686714 1252755 cache.go:107] acquiring lock: {Name:mk7c264ab4e632424507af2b6bc961f4dd7ebce3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:42:15.686752 1252755 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0717 21:42:15.686756 1252755 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 42.921µs
	I0717 21:42:15.686762 1252755 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0717 21:42:15.686771 1252755 cache.go:107] acquiring lock: {Name:mk66a448c5af4ad05b35b01bf89a6aec30c39cdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:42:15.686794 1252755 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0717 21:42:15.686799 1252755 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 29.194µs
	I0717 21:42:15.686805 1252755 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0717 21:42:15.686813 1252755 cache.go:107] acquiring lock: {Name:mk5ff5a548a20c6c4daaa89362bbf23fed93cfc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:42:15.686844 1252755 cache.go:115] /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0717 21:42:15.686849 1252755 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 36.726µs
	I0717 21:42:15.686855 1252755 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0717 21:42:15.686608 1252755 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0717 21:42:15.686706 1252755 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0717 21:42:15.686951 1252755 cache.go:87] Successfully saved all images to host disk.
	I0717 21:42:15.686974 1252755 cli_runner.go:164] Run: docker container inspect stopped-upgrade-189380 --format={{.State.Status}}
	I0717 21:42:15.704370 1252755 fix.go:102] recreateIfNeeded on stopped-upgrade-189380: state=Stopped err=<nil>
	W0717 21:42:15.704413 1252755 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 21:42:15.707390 1252755 out.go:177] * Restarting existing docker container for "stopped-upgrade-189380" ...
	I0717 21:42:15.708906 1252755 cli_runner.go:164] Run: docker start stopped-upgrade-189380
	I0717 21:42:16.067808 1252755 cli_runner.go:164] Run: docker container inspect stopped-upgrade-189380 --format={{.State.Status}}
	I0717 21:42:16.098504 1252755 kic.go:426] container "stopped-upgrade-189380" state is running.
	I0717 21:42:16.098962 1252755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-189380
	I0717 21:42:16.126125 1252755 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/stopped-upgrade-189380/config.json ...
	I0717 21:42:16.126372 1252755 machine.go:88] provisioning docker machine ...
	I0717 21:42:16.126396 1252755 ubuntu.go:169] provisioning hostname "stopped-upgrade-189380"
	I0717 21:42:16.126455 1252755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-189380
	I0717 21:42:16.155046 1252755 main.go:141] libmachine: Using SSH client type: native
	I0717 21:42:16.155613 1252755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34208 <nil> <nil>}
	I0717 21:42:16.155633 1252755 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-189380 && echo "stopped-upgrade-189380" | sudo tee /etc/hostname
	I0717 21:42:16.156326 1252755 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0717 21:42:19.330911 1252755 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-189380
	
	I0717 21:42:19.331083 1252755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-189380
	I0717 21:42:19.358633 1252755 main.go:141] libmachine: Using SSH client type: native
	I0717 21:42:19.359082 1252755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34208 <nil> <nil>}
	I0717 21:42:19.359104 1252755 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-189380' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-189380/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-189380' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 21:42:19.510567 1252755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 21:42:19.510590 1252755 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-1130480/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-1130480/.minikube}
	I0717 21:42:19.510613 1252755 ubuntu.go:177] setting up certificates
	I0717 21:42:19.510621 1252755 provision.go:83] configureAuth start
	I0717 21:42:19.510685 1252755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-189380
	I0717 21:42:19.529149 1252755 provision.go:138] copyHostCerts
	I0717 21:42:19.529291 1252755 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem, removing ...
	I0717 21:42:19.529300 1252755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem
	I0717 21:42:19.529376 1252755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/ca.pem (1082 bytes)
	I0717 21:42:19.529478 1252755 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem, removing ...
	I0717 21:42:19.529483 1252755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem
	I0717 21:42:19.529509 1252755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/cert.pem (1123 bytes)
	I0717 21:42:19.529568 1252755 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem, removing ...
	I0717 21:42:19.529572 1252755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem
	I0717 21:42:19.529596 1252755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-1130480/.minikube/key.pem (1675 bytes)
	I0717 21:42:19.529651 1252755 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-189380 san=[192.168.59.197 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-189380]
	I0717 21:42:19.784101 1252755 provision.go:172] copyRemoteCerts
	I0717 21:42:19.784215 1252755 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 21:42:19.784279 1252755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-189380
	I0717 21:42:19.804079 1252755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34208 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/stopped-upgrade-189380/id_rsa Username:docker}
	I0717 21:42:19.902593 1252755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 21:42:19.926458 1252755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 21:42:19.950065 1252755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 21:42:19.973127 1252755 provision.go:86] duration metric: configureAuth took 462.492713ms
	I0717 21:42:19.973286 1252755 ubuntu.go:193] setting minikube options for container-runtime
	I0717 21:42:19.973490 1252755 config.go:182] Loaded profile config "stopped-upgrade-189380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0717 21:42:19.973608 1252755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-189380
	I0717 21:42:19.995695 1252755 main.go:141] libmachine: Using SSH client type: native
	I0717 21:42:19.996148 1252755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 34208 <nil> <nil>}
	I0717 21:42:19.996170 1252755 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 21:42:20.408918 1252755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 21:42:20.408963 1252755 machine.go:91] provisioned docker machine in 4.282573251s
	I0717 21:42:20.408979 1252755 start.go:300] post-start starting for "stopped-upgrade-189380" (driver="docker")
	I0717 21:42:20.408992 1252755 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 21:42:20.409079 1252755 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 21:42:20.409133 1252755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-189380
	I0717 21:42:20.439022 1252755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34208 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/stopped-upgrade-189380/id_rsa Username:docker}
	I0717 21:42:20.538620 1252755 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 21:42:20.542546 1252755 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 21:42:20.542574 1252755 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 21:42:20.542585 1252755 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 21:42:20.542591 1252755 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0717 21:42:20.542620 1252755 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/addons for local assets ...
	I0717 21:42:20.542688 1252755 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-1130480/.minikube/files for local assets ...
	I0717 21:42:20.542770 1252755 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem -> 11358722.pem in /etc/ssl/certs
	I0717 21:42:20.542875 1252755 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 21:42:20.551616 1252755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/ssl/certs/11358722.pem --> /etc/ssl/certs/11358722.pem (1708 bytes)
	I0717 21:42:20.574884 1252755 start.go:303] post-start completed in 165.87483ms
	I0717 21:42:20.575009 1252755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:42:20.575073 1252755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-189380
	I0717 21:42:20.592848 1252755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34208 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/stopped-upgrade-189380/id_rsa Username:docker}
	I0717 21:42:20.690558 1252755 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 21:42:20.698730 1252755 fix.go:56] fixHost completed within 5.012011301s
	I0717 21:42:20.698751 1252755 start.go:83] releasing machines lock for "stopped-upgrade-189380", held for 5.01205752s
	I0717 21:42:20.698818 1252755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-189380
	I0717 21:42:20.716987 1252755 ssh_runner.go:195] Run: cat /version.json
	I0717 21:42:20.717039 1252755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-189380
	I0717 21:42:20.717382 1252755 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 21:42:20.717445 1252755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-189380
	I0717 21:42:20.741384 1252755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34208 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/stopped-upgrade-189380/id_rsa Username:docker}
	I0717 21:42:20.748830 1252755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34208 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/stopped-upgrade-189380/id_rsa Username:docker}
	W0717 21:42:20.911020 1252755 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 21:42:20.911645 1252755 ssh_runner.go:195] Run: systemctl --version
	I0717 21:42:20.917108 1252755 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 21:42:21.094724 1252755 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 21:42:21.101023 1252755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:42:21.126320 1252755 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 21:42:21.126417 1252755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:42:21.158775 1252755 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 21:42:21.158799 1252755 start.go:469] detecting cgroup driver to use...
	I0717 21:42:21.158831 1252755 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 21:42:21.158883 1252755 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 21:42:21.187961 1252755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 21:42:21.200889 1252755 docker.go:196] disabling cri-docker service (if available) ...
	I0717 21:42:21.200950 1252755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 21:42:21.214731 1252755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 21:42:21.227345 1252755 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0717 21:42:21.241082 1252755 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0717 21:42:21.241150 1252755 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 21:42:21.349817 1252755 docker.go:212] disabling docker service ...
	I0717 21:42:21.349888 1252755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 21:42:21.364082 1252755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 21:42:21.377011 1252755 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 21:42:21.482478 1252755 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 21:42:21.601828 1252755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 21:42:21.613967 1252755 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 21:42:21.632023 1252755 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 21:42:21.632129 1252755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:42:21.646641 1252755 out.go:177] 
	W0717 21:42:21.648319 1252755 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0717 21:42:21.648341 1252755 out.go:239] * 
	W0717 21:42:21.650896 1252755 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 21:42:21.654997 1252755 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-189380 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (69.77s)
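Note on the failure above: the root cause is visible at the end of the log. The old v1.17.0 base image does not ship /etc/crio/crio.conf.d/02-crio.conf, so the unconditional `sed -i` that rewrites pause_image exits with status 2 and the start aborts with RUNTIME_ENABLE. A minimal defensive sketch of that step, reusing the drop-in path and pause image from the log (the fallback file contents are an assumption, not what minikube itself writes):

	# sketch: guard the pause_image rewrite against a missing cri-o drop-in
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	IMG='registry.k8s.io/pause:3.2'
	if [ -f "$CONF" ]; then
	  sudo sed -i "s|^.*pause_image = .*$|pause_image = \"$IMG\"|" "$CONF"
	else
	  # assumed fallback: create the drop-in with only the pause image set
	  sudo mkdir -p /etc/crio/crio.conf.d
	  printf '[crio.image]\npause_image = "%s"\n' "$IMG" | sudo tee "$CONF" >/dev/null
	fi

Either branch leaves a readable file behind, so rerunning the same sed afterwards succeeds on base images with or without the drop-in.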

                                                
                                    

Test pass (268/304)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 13.61
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.27.3/json-events 14.23
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.22
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.59
22 TestAddons/Setup 155.14
24 TestAddons/parallel/Registry 16.56
26 TestAddons/parallel/InspektorGadget 11.14
27 TestAddons/parallel/MetricsServer 5.85
30 TestAddons/parallel/CSI 61.33
31 TestAddons/parallel/Headlamp 11.88
32 TestAddons/parallel/CloudSpanner 5.73
35 TestAddons/serial/GCPAuth/Namespaces 0.18
36 TestAddons/StoppedEnableDisable 12.29
37 TestCertOptions 38.21
38 TestCertExpiration 244.74
40 TestForceSystemdFlag 44.08
41 TestForceSystemdEnv 44.58
48 TestErrorSpam/start 0.8
49 TestErrorSpam/status 1.09
50 TestErrorSpam/pause 1.89
51 TestErrorSpam/unpause 2.09
52 TestErrorSpam/stop 1.44
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 75.03
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 27.19
59 TestFunctional/serial/KubeContext 0.07
60 TestFunctional/serial/KubectlGetPods 0.1
63 TestFunctional/serial/CacheCmd/cache/add_remote 4.01
64 TestFunctional/serial/CacheCmd/cache/add_local 1.07
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
66 TestFunctional/serial/CacheCmd/cache/list 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
68 TestFunctional/serial/CacheCmd/cache/cache_reload 2.11
69 TestFunctional/serial/CacheCmd/cache/delete 0.12
70 TestFunctional/serial/MinikubeKubectlCmd 0.15
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
72 TestFunctional/serial/ExtraConfig 35.6
73 TestFunctional/serial/ComponentHealth 0.14
74 TestFunctional/serial/LogsCmd 1.85
75 TestFunctional/serial/LogsFileCmd 1.91
76 TestFunctional/serial/InvalidService 4.67
78 TestFunctional/parallel/ConfigCmd 0.46
79 TestFunctional/parallel/DashboardCmd 9.42
80 TestFunctional/parallel/DryRun 0.48
81 TestFunctional/parallel/InternationalLanguage 0.22
82 TestFunctional/parallel/StatusCmd 1.18
86 TestFunctional/parallel/ServiceCmdConnect 9.76
87 TestFunctional/parallel/AddonsCmd 0.18
88 TestFunctional/parallel/PersistentVolumeClaim 27.06
90 TestFunctional/parallel/SSHCmd 0.76
91 TestFunctional/parallel/CpCmd 1.46
93 TestFunctional/parallel/FileSync 0.45
94 TestFunctional/parallel/CertSync 2.12
98 TestFunctional/parallel/NodeLabels 0.09
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
102 TestFunctional/parallel/License 0.53
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.44
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
109 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
113 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
114 TestFunctional/parallel/ServiceCmd/DeployApp 7.25
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.6
116 TestFunctional/parallel/ServiceCmd/List 0.73
117 TestFunctional/parallel/ProfileCmd/profile_list 0.47
118 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.65
121 TestFunctional/parallel/MountCmd/any-port 8.36
122 TestFunctional/parallel/ServiceCmd/Format 0.58
123 TestFunctional/parallel/ServiceCmd/URL 0.55
124 TestFunctional/parallel/MountCmd/specific-port 2.87
125 TestFunctional/parallel/MountCmd/VerifyCleanup 2.11
126 TestFunctional/parallel/Version/short 0.08
127 TestFunctional/parallel/Version/components 0.8
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
132 TestFunctional/parallel/ImageCommands/ImageBuild 2.99
133 TestFunctional/parallel/ImageCommands/Setup 2.76
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.02
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.88
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.3
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.92
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.28
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.7
144 TestFunctional/delete_addon-resizer_images 0.1
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
150 TestIngressAddonLegacy/StartLegacyK8sCluster 95
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.55
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.64
157 TestJSONOutput/start/Command 80.37
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.83
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.74
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.86
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.23
182 TestKicCustomNetwork/create_custom_network 43.45
183 TestKicCustomNetwork/use_default_bridge_network 35.45
184 TestKicExistingNetwork 36.05
185 TestKicCustomSubnet 35.54
186 TestKicStaticIP 37.46
187 TestMainNoArgs 0.05
188 TestMinikubeProfile 68.56
191 TestMountStart/serial/StartWithMountFirst 7.02
192 TestMountStart/serial/VerifyMountFirst 0.28
193 TestMountStart/serial/StartWithMountSecond 6.71
194 TestMountStart/serial/VerifyMountSecond 0.28
195 TestMountStart/serial/DeleteFirst 1.69
196 TestMountStart/serial/VerifyMountPostDelete 0.27
197 TestMountStart/serial/Stop 1.21
198 TestMountStart/serial/RestartStopped 8.87
199 TestMountStart/serial/VerifyMountPostStop 0.29
202 TestMultiNode/serial/FreshStart2Nodes 123.57
203 TestMultiNode/serial/DeployApp2Nodes 6.5
205 TestMultiNode/serial/AddNode 47.59
206 TestMultiNode/serial/ProfileList 0.35
207 TestMultiNode/serial/CopyFile 11.01
208 TestMultiNode/serial/StopNode 2.38
209 TestMultiNode/serial/StartAfterStop 12.38
210 TestMultiNode/serial/RestartKeepsNodes 122.21
211 TestMultiNode/serial/DeleteNode 5.06
212 TestMultiNode/serial/StopMultiNode 24.05
213 TestMultiNode/serial/RestartMultiNode 80.52
214 TestMultiNode/serial/ValidateNameConflict 31.8
219 TestPreload 145.79
221 TestScheduledStopUnix 108.91
224 TestInsufficientStorage 10.45
227 TestKubernetesUpgrade 401.39
230 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
231 TestNoKubernetes/serial/StartWithK8s 43.26
232 TestNoKubernetes/serial/StartWithStopK8s 10.38
233 TestNoKubernetes/serial/Start 9.74
234 TestNoKubernetes/serial/VerifyK8sNotRunning 0.41
235 TestNoKubernetes/serial/ProfileList 1.15
236 TestNoKubernetes/serial/Stop 1.33
237 TestNoKubernetes/serial/StartNoArgs 7.66
238 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.51
239 TestStoppedBinaryUpgrade/Setup 1.15
241 TestStoppedBinaryUpgrade/MinikubeLogs 0.65
250 TestPause/serial/Start 77.83
251 TestPause/serial/SecondStartNoReconfiguration 42.13
252 TestPause/serial/Pause 1.47
253 TestPause/serial/VerifyStatus 0.63
254 TestPause/serial/Unpause 1.21
255 TestPause/serial/PauseAgain 1.64
256 TestPause/serial/DeletePaused 3.27
257 TestPause/serial/VerifyDeletedResources 12.89
265 TestNetworkPlugins/group/false 4.5
270 TestStartStop/group/old-k8s-version/serial/FirstStart 123.62
271 TestStartStop/group/old-k8s-version/serial/DeployApp 10.77
272 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
273 TestStartStop/group/old-k8s-version/serial/Stop 12.15
274 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
275 TestStartStop/group/old-k8s-version/serial/SecondStart 433.06
277 TestStartStop/group/no-preload/serial/FirstStart 69.6
278 TestStartStop/group/no-preload/serial/DeployApp 9.51
279 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.31
280 TestStartStop/group/no-preload/serial/Stop 12.11
281 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
282 TestStartStop/group/no-preload/serial/SecondStart 611.89
283 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
284 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
285 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.49
286 TestStartStop/group/old-k8s-version/serial/Pause 5.17
288 TestStartStop/group/embed-certs/serial/FirstStart 84.46
289 TestStartStop/group/embed-certs/serial/DeployApp 9.56
290 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
291 TestStartStop/group/embed-certs/serial/Stop 12.17
292 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
293 TestStartStop/group/embed-certs/serial/SecondStart 619.81
294 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
295 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
296 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.36
297 TestStartStop/group/no-preload/serial/Pause 3.49
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.97
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.51
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.31
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.12
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
304 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 603.21
305 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
306 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
307 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.39
308 TestStartStop/group/embed-certs/serial/Pause 3.42
310 TestStartStop/group/newest-cni/serial/FirstStart 43.65
311 TestStartStop/group/newest-cni/serial/DeployApp 0
312 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.19
313 TestStartStop/group/newest-cni/serial/Stop 1.25
314 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
315 TestStartStop/group/newest-cni/serial/SecondStart 30.43
316 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
317 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
318 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
319 TestStartStop/group/newest-cni/serial/Pause 3.21
320 TestNetworkPlugins/group/auto/Start 50.43
321 TestNetworkPlugins/group/auto/KubeletFlags 0.32
322 TestNetworkPlugins/group/auto/NetCatPod 11.41
323 TestNetworkPlugins/group/auto/DNS 0.23
324 TestNetworkPlugins/group/auto/Localhost 0.19
325 TestNetworkPlugins/group/auto/HairPin 0.18
326 TestNetworkPlugins/group/kindnet/Start 78.06
327 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.03
328 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
329 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
330 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.51
331 TestNetworkPlugins/group/calico/Start 75.62
332 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
333 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
334 TestNetworkPlugins/group/kindnet/NetCatPod 13.68
335 TestNetworkPlugins/group/kindnet/DNS 0.29
336 TestNetworkPlugins/group/kindnet/Localhost 0.27
337 TestNetworkPlugins/group/kindnet/HairPin 0.24
338 TestNetworkPlugins/group/custom-flannel/Start 77.75
339 TestNetworkPlugins/group/calico/ControllerPod 5.04
340 TestNetworkPlugins/group/calico/KubeletFlags 0.42
341 TestNetworkPlugins/group/calico/NetCatPod 12.56
342 TestNetworkPlugins/group/calico/DNS 0.24
343 TestNetworkPlugins/group/calico/Localhost 0.24
344 TestNetworkPlugins/group/calico/HairPin 0.22
345 TestNetworkPlugins/group/enable-default-cni/Start 89.98
346 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
347 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.58
348 TestNetworkPlugins/group/custom-flannel/DNS 0.3
349 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
350 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
351 TestNetworkPlugins/group/flannel/Start 69.4
352 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.48
353 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.73
354 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
355 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
356 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
357 TestNetworkPlugins/group/flannel/ControllerPod 5.04
358 TestNetworkPlugins/group/flannel/KubeletFlags 0.49
359 TestNetworkPlugins/group/flannel/NetCatPod 11.46
360 TestNetworkPlugins/group/bridge/Start 88.56
361 TestNetworkPlugins/group/flannel/DNS 0.23
362 TestNetworkPlugins/group/flannel/Localhost 0.18
363 TestNetworkPlugins/group/flannel/HairPin 0.19
364 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
365 TestNetworkPlugins/group/bridge/NetCatPod 11.36
366 TestNetworkPlugins/group/bridge/DNS 0.33
367 TestNetworkPlugins/group/bridge/Localhost 0.2
368 TestNetworkPlugins/group/bridge/HairPin 0.18
TestDownloadOnly/v1.16.0/json-events (13.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-025848 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-025848 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.611688875s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (13.61s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-025848
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-025848: exit status 85 (76.618464ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-025848 | jenkins | v1.30.1 | 17 Jul 23 21:02 UTC |          |
	|         | -p download-only-025848        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:02:57
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:02:57.971774 1135877 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:02:57.972020 1135877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:02:57.972047 1135877 out.go:309] Setting ErrFile to fd 2...
	I0717 21:02:57.972065 1135877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:02:57.972402 1135877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
	W0717 21:02:57.972567 1135877 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16890-1130480/.minikube/config/config.json: open /home/jenkins/minikube-integration/16890-1130480/.minikube/config/config.json: no such file or directory
	I0717 21:02:57.973060 1135877 out.go:303] Setting JSON to true
	I0717 21:02:57.974216 1135877 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20721,"bootTime":1689607057,"procs":370,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0717 21:02:57.974316 1135877 start.go:138] virtualization:  
	I0717 21:02:57.977493 1135877 out.go:97] [download-only-025848] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	W0717 21:02:57.977758 1135877 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 21:02:57.979724 1135877 out.go:169] MINIKUBE_LOCATION=16890
	I0717 21:02:57.977872 1135877 notify.go:220] Checking for updates...
	I0717 21:02:57.983311 1135877 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:02:57.985338 1135877 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:02:57.987675 1135877 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	I0717 21:02:57.989498 1135877 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0717 21:02:57.992568 1135877 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 21:02:57.992855 1135877 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:02:58.021757 1135877 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:02:58.021846 1135877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:02:58.113374 1135877 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-07-17 21:02:58.103507057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:02:58.113489 1135877 docker.go:294] overlay module found
	I0717 21:02:58.115427 1135877 out.go:97] Using the docker driver based on user configuration
	I0717 21:02:58.115453 1135877 start.go:298] selected driver: docker
	I0717 21:02:58.115461 1135877 start.go:880] validating driver "docker" against <nil>
	I0717 21:02:58.115560 1135877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:02:58.193799 1135877 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-07-17 21:02:58.183691836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:02:58.193968 1135877 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 21:02:58.194251 1135877 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0717 21:02:58.194405 1135877 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 21:02:58.196664 1135877 out.go:169] Using Docker driver with root privileges
	I0717 21:02:58.198751 1135877 cni.go:84] Creating CNI manager for ""
	I0717 21:02:58.198773 1135877 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:02:58.198784 1135877 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 21:02:58.198795 1135877 start_flags.go:319] config:
	{Name:download-only-025848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-025848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:02:58.200657 1135877 out.go:97] Starting control plane node download-only-025848 in cluster download-only-025848
	I0717 21:02:58.200702 1135877 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 21:02:58.202733 1135877 out.go:97] Pulling base image ...
	I0717 21:02:58.202772 1135877 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 21:02:58.202925 1135877 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 21:02:58.219552 1135877 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 21:02:58.220184 1135877 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 21:02:58.220302 1135877 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 21:02:58.272904 1135877 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0717 21:02:58.272930 1135877 cache.go:57] Caching tarball of preloaded images
	I0717 21:02:58.273550 1135877 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 21:02:58.275736 1135877 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0717 21:02:58.275759 1135877 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0717 21:02:58.401954 1135877 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0717 21:03:03.150106 1135877 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-025848"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
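Note: the exit status 85 above is tolerated, not a regression. A --download-only start never creates a node (hence `The control plane node "" does not exist.`), so `minikube logs` has nothing to read beyond the audit table, and the test still passes. A hand-repro sketch under an assumed scratch profile name ("demo"), with the flags taken from the invocation above:

	minikube start -o=json --download-only -p demo \
	  --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker
	minikube logs -p demo
	echo "exit status: $?"   # 85 in this report's environment
	minikube delete -p demo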

                                                
                                    
TestDownloadOnly/v1.27.3/json-events (14.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-025848 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-025848 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.226924815s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (14.23s)

                                                
                                    
TestDownloadOnly/v1.27.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-025848
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-025848: exit status 85 (73.504471ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-025848 | jenkins | v1.30.1 | 17 Jul 23 21:02 UTC |          |
	|         | -p download-only-025848        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-025848 | jenkins | v1.30.1 | 17 Jul 23 21:03 UTC |          |
	|         | -p download-only-025848        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:03:11
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:03:11.669576 1135956 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:03:11.669717 1135956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:03:11.669726 1135956 out.go:309] Setting ErrFile to fd 2...
	I0717 21:03:11.669732 1135956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:03:11.670008 1135956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
	W0717 21:03:11.670137 1135956 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16890-1130480/.minikube/config/config.json: open /home/jenkins/minikube-integration/16890-1130480/.minikube/config/config.json: no such file or directory
	I0717 21:03:11.670360 1135956 out.go:303] Setting JSON to true
	I0717 21:03:11.671418 1135956 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":20735,"bootTime":1689607057,"procs":367,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0717 21:03:11.671485 1135956 start.go:138] virtualization:  
	I0717 21:03:11.674870 1135956 out.go:97] [download-only-025848] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 21:03:11.678580 1135956 out.go:169] MINIKUBE_LOCATION=16890
	I0717 21:03:11.675230 1135956 notify.go:220] Checking for updates...
	I0717 21:03:11.683447 1135956 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:03:11.686808 1135956 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:03:11.690086 1135956 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	I0717 21:03:11.692498 1135956 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0717 21:03:11.698716 1135956 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 21:03:11.699268 1135956 config.go:182] Loaded profile config "download-only-025848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0717 21:03:11.699352 1135956 start.go:788] api.Load failed for download-only-025848: filestore "download-only-025848": Docker machine "download-only-025848" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 21:03:11.699488 1135956 driver.go:373] Setting default libvirt URI to qemu:///system
	W0717 21:03:11.699514 1135956 start.go:788] api.Load failed for download-only-025848: filestore "download-only-025848": Docker machine "download-only-025848" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 21:03:11.723687 1135956 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:03:11.723785 1135956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:03:11.806172 1135956 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-07-17 21:03:11.795912596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:03:11.806281 1135956 docker.go:294] overlay module found
	I0717 21:03:11.808382 1135956 out.go:97] Using the docker driver based on existing profile
	I0717 21:03:11.808409 1135956 start.go:298] selected driver: docker
	I0717 21:03:11.808433 1135956 start.go:880] validating driver "docker" against &{Name:download-only-025848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-025848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:03:11.808607 1135956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:03:11.877135 1135956 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-07-17 21:03:11.867675893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:03:11.877628 1135956 cni.go:84] Creating CNI manager for ""
	I0717 21:03:11.877644 1135956 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:03:11.877653 1135956 start_flags.go:319] config:
	{Name:download-only-025848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-025848 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:03:11.880164 1135956 out.go:97] Starting control plane node download-only-025848 in cluster download-only-025848
	I0717 21:03:11.880191 1135956 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 21:03:11.882300 1135956 out.go:97] Pulling base image ...
	I0717 21:03:11.882332 1135956 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:03:11.882484 1135956 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 21:03:11.899362 1135956 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 21:03:11.899481 1135956 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 21:03:11.899505 1135956 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0717 21:03:11.899511 1135956 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0717 21:03:11.899521 1135956 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 21:03:11.955060 1135956 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4
	I0717 21:03:11.955084 1135956 cache.go:57] Caching tarball of preloaded images
	I0717 21:03:11.955616 1135956 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:03:11.957860 1135956 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0717 21:03:11.957889 1135956 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4 ...
	I0717 21:03:12.063678 1135956 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:5385d65818d7d3a2749f9dcda9541749 -> /home/jenkins/minikube-integration/16890-1130480/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-025848"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-025848
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-287357 --alsologtostderr --binary-mirror http://127.0.0.1:46049 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-287357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-287357
--- PASS: TestBinaryMirror (0.59s)
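For reference, --binary-mirror only redirects where the kubectl, kubelet, and kubeadm binaries are downloaded from; the test points a download-only start at a local HTTP endpoint on 127.0.0.1:46049. A rough stand-in for that setup, assuming a ./mirror directory laid out like the upstream release paths (profile name and mirror root are hypothetical):

	python3 -m http.server 46049 --directory ./mirror &
	minikube start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:46049 \
	  --driver=docker --container-runtime=crio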

                                                
                                    
x
+
TestAddons/Setup (155.14s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-966885 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-966885 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m35.13640303s)
--- PASS: TestAddons/Setup (155.14s)

TestAddons/parallel/Registry (16.56s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 52.573817ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-pw2qn" [06dc9e5a-f654-41c5-be3e-33ed763b415d] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.012916544s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-l2jb4" [64c0d8ab-3b0f-4220-aa8b-e6af17da8a29] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.016291491s
addons_test.go:316: (dbg) Run:  kubectl --context addons-966885 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-966885 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-966885 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.35004262s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-966885 ip
2023/07/17 21:06:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-966885 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.56s)
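For reference: the wget --spider step above checks the registry service over HTTP from inside the cluster. An equivalent Go probe is a HEAD request against the same service DNS name; note that registry.kube-system.svc.cluster.local only resolves from within the cluster, which is why the test runs the check inside a busybox pod. A minimal sketch (the 200 expectation is an assumption about the registry's root endpoint):

	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// Resolves only inside the cluster, e.g. when run from a pod.
		resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			log.Fatalf("registry returned %d", resp.StatusCode)
		}
		fmt.Println("registry reachable")
	}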

TestAddons/parallel/InspektorGadget (11.14s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-r6m4g" [4c5ae692-a3db-4007-aeed-8a81436779c6] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012700286s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-966885
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-966885: (6.12753637s)
--- PASS: TestAddons/parallel/InspektorGadget (11.14s)

TestAddons/parallel/MetricsServer (5.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 9.680787ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-7jvwv" [f07620db-0f6a-44f1-87ce-68016e67d4b0] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01542267s
addons_test.go:391: (dbg) Run:  kubectl --context addons-966885 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-966885 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.85s)

TestAddons/parallel/CSI (61.33s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 9.202724ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-966885 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-966885 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3aae13d6-36bf-4ae2-bb8a-5a31b84fc518] Pending
helpers_test.go:344: "task-pv-pod" [3aae13d6-36bf-4ae2-bb8a-5a31b84fc518] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3aae13d6-36bf-4ae2-bb8a-5a31b84fc518] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.015454386s
addons_test.go:560: (dbg) Run:  kubectl --context addons-966885 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-966885 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-966885 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-966885 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-966885 delete pod task-pv-pod: (1.012151393s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-966885 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-966885 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-966885 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-966885 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f30c4bd4-d9bb-4b9c-bf72-09085281592b] Pending
helpers_test.go:344: "task-pv-pod-restore" [f30c4bd4-d9bb-4b9c-bf72-09085281592b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f30c4bd4-d9bb-4b9c-bf72-09085281592b] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.020527516s
addons_test.go:602: (dbg) Run:  kubectl --context addons-966885 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-966885 delete pod task-pv-pod-restore: (1.066903149s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-966885 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-966885 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-966885 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-966885 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.812835099s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-966885 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (61.33s)
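For reference: the long run of helpers_test.go:394 lines above is a poll loop, re-reading the claim's phase with the same jsonpath query until it reports Bound. A minimal Go sketch of that loop, shelling out to the kubectl invocation seen in the log (the function name and the 2s interval are illustrative, not the harness's actual code):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCBound re-runs the jsonpath query from the log until the
	// claim reports Bound or the timeout elapses.
	func waitForPVCBound(kubeContext, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", "default").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s not Bound within %s", name, timeout)
	}

	func main() {
		if err := waitForPVCBound("addons-966885", "hpvc", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("hpvc is Bound")
	}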

TestAddons/parallel/Headlamp (11.88s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-966885 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-966885 --alsologtostderr -v=1: (1.841148949s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-xs69b" [25e423a1-81dc-4df9-8edc-5f9dc3fdfb1b] Pending
helpers_test.go:344: "headlamp-66f6498c69-xs69b" [25e423a1-81dc-4df9-8edc-5f9dc3fdfb1b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-xs69b" [25e423a1-81dc-4df9-8edc-5f9dc3fdfb1b] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.035904836s
--- PASS: TestAddons/parallel/Headlamp (11.88s)

TestAddons/parallel/CloudSpanner (5.73s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-lwls9" [baa22b3e-ff0c-441e-a0ab-ff2b7b392a4c] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.013342381s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-966885
--- PASS: TestAddons/parallel/CloudSpanner (5.73s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-966885 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-966885 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/StoppedEnableDisable (12.29s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-966885
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-966885: (12.019885254s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-966885
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-966885
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-966885
--- PASS: TestAddons/StoppedEnableDisable (12.29s)

TestCertOptions (38.21s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-465054 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-465054 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.543703046s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-465054 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-465054 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-465054 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-465054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-465054
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-465054: (1.993406911s)
--- PASS: TestCertOptions (38.21s)
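For reference: the openssl step above checks that the generated apiserver certificate actually carries the extra --apiserver-ips and --apiserver-names values. The same check can be done with Go's standard library; a minimal sketch, assuming a local copy of the node's /var/lib/minikube/certs/apiserver.crt:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // local copy of the node's cert
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Expect localhost and www.google.com among the DNS SANs, and
		// 127.0.0.1 plus 192.168.15.15 among the IP SANs.
		fmt.Println("DNS names:", cert.DNSNames)
		for _, ip := range cert.IPAddresses {
			fmt.Println("IP SAN:", ip)
		}
	}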

TestCertExpiration (244.74s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-731236 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-731236 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.280558657s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-731236 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-731236 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (19.803434449s)
helpers_test.go:175: Cleaning up "cert-expiration-731236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-731236
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-731236: (2.652436965s)
--- PASS: TestCertExpiration (244.74s)

TestForceSystemdFlag (44.08s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-501082 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0717 21:46:02.754152 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-501082 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.05989499s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-501082 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-501082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-501082
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-501082: (2.616001582s)
--- PASS: TestForceSystemdFlag (44.08s)
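For reference: the test above confirms --force-systemd by reading CRI-O's drop-in config from the node. A minimal Go sketch of the same assertion against a local copy of that file; the exact cgroup_manager key is an assumption about CRI-O's TOML layout, not a value taken from this run:

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		// Local copy of /etc/crio/crio.conf.d/02-crio.conf from the node.
		data, err := os.ReadFile("02-crio.conf")
		if err != nil {
			log.Fatal(err)
		}
		if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
			fmt.Println("CRI-O is using the systemd cgroup manager")
		} else {
			fmt.Println("systemd cgroup manager not configured")
		}
	}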

TestForceSystemdEnv (44.58s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-914430 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0717 21:46:23.384254 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-914430 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.96264917s)
helpers_test.go:175: Cleaning up "force-systemd-env-914430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-914430
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-914430: (2.620616999s)
--- PASS: TestForceSystemdEnv (44.58s)

TestErrorSpam/start (0.8s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 status
--- PASS: TestErrorSpam/status (1.09s)

TestErrorSpam/pause (1.89s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 pause
--- PASS: TestErrorSpam/pause (1.89s)

TestErrorSpam/unpause (2.09s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 unpause
--- PASS: TestErrorSpam/unpause (2.09s)

TestErrorSpam/stop (1.44s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 stop: (1.238134789s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-526468 --log_dir /tmp/nospam-526468 stop
--- PASS: TestErrorSpam/stop (1.44s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16890-1130480/.minikube/files/etc/test/nested/copy/1135872/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.03s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812870 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0717 21:11:02.754582 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:11:02.760930 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:11:02.771175 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:11:02.791438 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:11:02.831681 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:11:02.911942 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:11:03.072318 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:11:03.392721 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:11:04.033322 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:11:05.313551 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:11:07.873787 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:11:12.994707 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:11:23.234887 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:11:43.715289 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-812870 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m15.026401654s)
--- PASS: TestFunctional/serial/StartWithProxy (75.03s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.19s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812870 --alsologtostderr -v=8
E0717 21:12:24.675739 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-812870 --alsologtostderr -v=8: (27.186854254s)
functional_test.go:659: soft start took 27.187355931s for "functional-812870" cluster.
--- PASS: TestFunctional/serial/SoftStart (27.19s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-812870 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-812870 cache add registry.k8s.io/pause:3.1: (1.409709847s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-812870 cache add registry.k8s.io/pause:3.3: (1.362213039s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-812870 cache add registry.k8s.io/pause:latest: (1.240939058s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.01s)

TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-812870 /tmp/TestFunctionalserialCacheCmdcacheadd_local3727176077/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 cache add minikube-local-cache-test:functional-812870
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 cache delete minikube-local-cache-test:functional-812870
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-812870
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812870 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (334.930666ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-812870 cache reload: (1.101295062s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)
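For reference: the reload flow above turns on exit codes — crictl inspecti exits 0 while the image is present and non-zero once it has been removed, and cache reload restores it. A minimal Go sketch of that presence probe via the same minikube ssh invocation (run from the repo root so the out/ binary path resolves; an illustration, not the harness's code):

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	// imagePresent mirrors the inspecti probe in the log: exit 0 means the
	// image exists in the node's runtime, a non-zero exit means it does not.
	func imagePresent(profile, image string) (bool, error) {
		err := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"ssh", "sudo crictl inspecti "+image).Run()
		if err == nil {
			return true, nil
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return false, nil // command ran; image is absent
		}
		return false, err // the command itself failed to start
	}

	func main() {
		present, err := imagePresent("functional-812870", "registry.k8s.io/pause:latest")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("pause:latest present:", present)
	}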

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 kubectl -- --context functional-812870 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-812870 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (35.6s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812870 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-812870 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.599836922s)
functional_test.go:757: restart took 35.59993375s for "functional-812870" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.60s)

TestFunctional/serial/ComponentHealth (0.14s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-812870 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.14s)

TestFunctional/serial/LogsCmd (1.85s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-812870 logs: (1.851507686s)
--- PASS: TestFunctional/serial/LogsCmd (1.85s)

TestFunctional/serial/LogsFileCmd (1.91s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 logs --file /tmp/TestFunctionalserialLogsFileCmd155586142/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-812870 logs --file /tmp/TestFunctionalserialLogsFileCmd155586142/001/logs.txt: (1.910149187s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.91s)

TestFunctional/serial/InvalidService (4.67s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-812870 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-812870
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-812870: exit status 115 (599.97265ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32244 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-812870 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.67s)
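For reference: the SVC_UNREACHABLE exit above fires because invalid-svc has no running pods behind it, so the allocated NodePort has nothing to route to. One way to observe the same condition is to ask for the service's ready endpoint addresses; an empty result means no backends. A minimal Go sketch (the jsonpath query is an illustrative equivalent, not the check minikube itself performs):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-812870",
			"get", "endpoints", "invalid-svc",
			"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
		if err != nil {
			log.Fatal(err)
		}
		if strings.TrimSpace(string(out)) == "" {
			fmt.Println("invalid-svc has no ready endpoints; service is unreachable")
		} else {
			fmt.Println("ready endpoints:", string(out))
		}
	}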

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812870 config get cpus: exit status 14 (68.541157ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812870 config get cpus: exit status 14 (57.298323ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (9.42s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-812870 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-812870 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1159443: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.42s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812870 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-812870 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (199.87177ms)

-- stdout --
	* [functional-812870] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0717 21:13:56.138539 1159157 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:13:56.138776 1159157 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:13:56.138806 1159157 out.go:309] Setting ErrFile to fd 2...
	I0717 21:13:56.138826 1159157 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:13:56.139120 1159157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
	I0717 21:13:56.139527 1159157 out.go:303] Setting JSON to false
	I0717 21:13:56.140539 1159157 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21380,"bootTime":1689607057,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0717 21:13:56.140631 1159157 start.go:138] virtualization:  
	I0717 21:13:56.142789 1159157 out.go:177] * [functional-812870] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 21:13:56.145123 1159157 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 21:13:56.147140 1159157 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:13:56.145333 1159157 notify.go:220] Checking for updates...
	I0717 21:13:56.151194 1159157 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:13:56.153282 1159157 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	I0717 21:13:56.154781 1159157 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 21:13:56.156192 1159157 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:13:56.158303 1159157 config.go:182] Loaded profile config "functional-812870": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:13:56.158915 1159157 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:13:56.183203 1159157 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:13:56.183310 1159157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:13:56.273829 1159157 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-07-17 21:13:56.263348462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:13:56.273935 1159157 docker.go:294] overlay module found
	I0717 21:13:56.275975 1159157 out.go:177] * Using the docker driver based on existing profile
	I0717 21:13:56.277581 1159157 start.go:298] selected driver: docker
	I0717 21:13:56.277597 1159157 start.go:880] validating driver "docker" against &{Name:functional-812870 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-812870 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:13:56.277714 1159157 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:13:56.279895 1159157 out.go:177] 
	W0717 21:13:56.281423 1159157 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 21:13:56.282984 1159157 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812870 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.48s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-812870 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-812870 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (219.720889ms)

-- stdout --
	* [functional-812870] minikube v1.30.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0717 21:13:55.920129 1159117 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:13:55.920372 1159117 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:13:55.920384 1159117 out.go:309] Setting ErrFile to fd 2...
	I0717 21:13:55.920390 1159117 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:13:55.920790 1159117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
	I0717 21:13:55.921146 1159117 out.go:303] Setting JSON to false
	I0717 21:13:55.922304 1159117 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21379,"bootTime":1689607057,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0717 21:13:55.922377 1159117 start.go:138] virtualization:  
	I0717 21:13:55.924753 1159117 out.go:177] * [functional-812870] minikube v1.30.1 sur Ubuntu 20.04 (arm64)
	I0717 21:13:55.927365 1159117 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 21:13:55.929304 1159117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:13:55.927528 1159117 notify.go:220] Checking for updates...
	I0717 21:13:55.932983 1159117 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:13:55.934915 1159117 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	I0717 21:13:55.936607 1159117 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 21:13:55.938138 1159117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:13:55.940296 1159117 config.go:182] Loaded profile config "functional-812870": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:13:55.940916 1159117 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:13:55.969043 1159117 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:13:55.969141 1159117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:13:56.072139 1159117 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-07-17 21:13:56.060143067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:13:56.072265 1159117 docker.go:294] overlay module found
	I0717 21:13:56.074490 1159117 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0717 21:13:56.076271 1159117 start.go:298] selected driver: docker
	I0717 21:13:56.076294 1159117 start.go:880] validating driver "docker" against &{Name:functional-812870 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-812870 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:13:56.076419 1159117 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:13:56.078479 1159117 out.go:177] 
	W0717 21:13:56.080116 1159117 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 21:13:56.081855 1159117 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
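
The French output above is the point of this test: "Utilisation du pilote docker basé sur le profil existant" is "Using the docker driver based on existing profile", and the X line is the same RSRC_INSUFFICIENT_REQ_MEMORY failure seen in DryRun ("requested memory allocation 250MiB is less than the usable minimum of 1800MB"). A minimal sketch of reproducing the localized output by hand, assuming minikube picks its translation from the standard locale variables and that a French locale is available on the host:

    # hypothetical manual re-run of what the test exercises
    LC_ALL=fr out/minikube-linux-arm64 start -p functional-812870 --dry-run --memory 250MB --driver=docker --container-runtime=crio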

                                                
                                    
TestFunctional/parallel/StatusCmd (1.18s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)
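
The -f flag in the second invocation above takes a Go template rendered against minikube's status struct; "kublet" is just a free-form label in the template string chosen by the test (the field itself is {{.Kubelet}}). A minimal sketch, with the output shape assumed for a healthy cluster:

    out/minikube-linux-arm64 -p functional-812870 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
    # expected shape of the output: host:Running,kubelet:Running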

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.76s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-812870 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-812870 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-qhpnz" [3d96dfa7-a1be-4e12-9d49-4f3ce1366905] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-qhpnz" [3d96dfa7-a1be-4e12-9d49-4f3ce1366905] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.008556575s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30708
functional_test.go:1674: http://192.168.49.2:30708: success! body:

Hostname: hello-node-connect-58d66798bb-qhpnz

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30708
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.76s)
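
Condensed from the log, the NodePort round trip this test performs is:

    kubectl --context functional-812870 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-812870 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-arm64 -p functional-812870 service hello-node-connect --url   # printed http://192.168.49.2:30708 in this run
    curl http://192.168.49.2:30708/                                                  # echoserver reflects the request, as in the body above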

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (27.06s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c243d21e-4a41-4cbe-9b62-304526d3bc0c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.040333313s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-812870 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-812870 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-812870 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-812870 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1752f162-5d69-486b-bfdd-6252f3847f0e] Pending
helpers_test.go:344: "sp-pod" [1752f162-5d69-486b-bfdd-6252f3847f0e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1752f162-5d69-486b-bfdd-6252f3847f0e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.014891346s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-812870 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-812870 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-812870 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7ddf9e1b-0e02-4c3b-95dd-86d5ade40165] Pending
helpers_test.go:344: "sp-pod" [7ddf9e1b-0e02-4c3b-95dd-86d5ade40165] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7ddf9e1b-0e02-4c3b-95dd-86d5ade40165] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.015961748s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-812870 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.06s)
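
The persistence check above boils down to writing through the claim, deleting the pod, and verifying the file survives into a new pod; condensed from the log:

    kubectl --context functional-812870 apply -f testdata/storage-provisioner/pvc.yaml   # creates claim "myclaim"
    kubectl --context functional-812870 apply -f testdata/storage-provisioner/pod.yaml   # "sp-pod" mounts it at /tmp/mount
    kubectl --context functional-812870 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-812870 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-812870 apply -f testdata/storage-provisioner/pod.yaml   # fresh pod, same claim
    kubectl --context functional-812870 exec sp-pod -- ls /tmp/mount                     # "foo" must still be there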

                                                
                                    
TestFunctional/parallel/SSHCmd (0.76s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.76s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.46s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh -n functional-812870 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 cp functional-812870:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2222739730/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh -n functional-812870 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.46s)
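
The cp round trip, condensed from the log (the /tmp destination below is illustrative; the test used a per-run temp directory):

    out/minikube-linux-arm64 -p functional-812870 cp testdata/cp-test.txt /home/docker/cp-test.txt               # host -> node
    out/minikube-linux-arm64 -p functional-812870 ssh -n functional-812870 "sudo cat /home/docker/cp-test.txt"   # verify in-node
    out/minikube-linux-arm64 -p functional-812870 cp functional-812870:/home/docker/cp-test.txt /tmp/cp-test.txt # node -> host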

                                                
                                    
TestFunctional/parallel/FileSync (0.45s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1135872/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "sudo cat /etc/test/nested/copy/1135872/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.45s)

                                                
                                    
TestFunctional/parallel/CertSync (2.12s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1135872.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "sudo cat /etc/ssl/certs/1135872.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1135872.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "sudo cat /usr/share/ca-certificates/1135872.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/11358722.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "sudo cat /etc/ssl/certs/11358722.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/11358722.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "sudo cat /usr/share/ca-certificates/11358722.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.12s)
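
The hashed names (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links for the synced certs, which is why each cert is checked under three paths. Assuming the standard OpenSSL hashing scheme, the pairing can be confirmed by recomputing the hash:

    openssl x509 -noout -hash -in /usr/share/ca-certificates/1135872.pem    # expected: 51391683
    openssl x509 -noout -hash -in /usr/share/ca-certificates/11358722.pem   # expected: 3ec20f2e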

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-812870 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812870 ssh "sudo systemctl is-active docker": exit status 1 (363.343607ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812870 ssh "sudo systemctl is-active containerd": exit status 1 (327.753198ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
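
The non-zero exits here are the expected result: with crio as the active runtime, docker and containerd must both be inactive, and systemctl is-active prints the state but exits non-zero when the unit is not active (3 is the conventional code for "inactive"), which the ssh wrapper surfaces as exit status 1. Rechecking by hand inside the node, a sketch:

    out/minikube-linux-arm64 -p functional-812870 ssh 'sudo systemctl is-active docker'
    # prints "inactive" and exits 3; crio itself should report "active"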

                                                
                                    
TestFunctional/parallel/License (0.53s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.53s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-812870 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-812870 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-812870 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1157362: os: process already finished
helpers_test.go:502: unable to terminate pid 1157223: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-812870 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-812870 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-812870 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b7ef00de-6a66-4fd3-a50f-8f2df4eccc9a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b7ef00de-6a66-4fd3-a50f-8f2df4eccc9a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.01884868s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-812870 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.22.71 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-812870 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
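
Taken together, the tunnel subtests above exercise this flow: while minikube tunnel runs, LoadBalancer services become reachable from the host, so nginx-svc is assigned an ingress IP (10.96.22.71 in this run) that can be curled directly; killing the tunnel removes the route. Condensed:

    out/minikube-linux-arm64 -p functional-812870 tunnel --alsologtostderr &
    kubectl --context functional-812870 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.96.22.71/   # IP from this run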

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-812870 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-812870 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-5n9pt" [9aeafea3-3ce8-4014-9880-6f86c1a8c724] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-5n9pt" [9aeafea3-3ce8-4014-9880-6f86c1a8c724] Running
E0717 21:13:46.596119 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.008097424s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.6s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.73s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.73s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "381.589519ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "83.582927ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 service list -o json
functional_test.go:1493: Took "637.029965ms" to run "out/minikube-linux-arm64 -p functional-812870 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "416.915633ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "60.409531ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32255
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.65s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.36s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-812870 /tmp/TestFunctionalparallelMountCmdany-port4041988705/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1689628432911490966" to /tmp/TestFunctionalparallelMountCmdany-port4041988705/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1689628432911490966" to /tmp/TestFunctionalparallelMountCmdany-port4041988705/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1689628432911490966" to /tmp/TestFunctionalparallelMountCmdany-port4041988705/001/test-1689628432911490966
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 21:13 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 21:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 21:13 test-1689628432911490966
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh cat /mount-9p/test-1689628432911490966
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-812870 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cca93b1d-4ee0-4535-8d39-d0d5ddd02bdb] Pending
helpers_test.go:344: "busybox-mount" [cca93b1d-4ee0-4535-8d39-d0d5ddd02bdb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [cca93b1d-4ee0-4535-8d39-d0d5ddd02bdb] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [cca93b1d-4ee0-4535-8d39-d0d5ddd02bdb] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.016948029s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-812870 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812870 /tmp/TestFunctionalparallelMountCmdany-port4041988705/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.36s)
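
The 9p mount flow above, condensed (the host directory below is illustrative; the test used a per-run temp directory):

    out/minikube-linux-arm64 mount -p functional-812870 /tmp/some-host-dir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-812870 ssh "findmnt -T /mount-9p | grep 9p"   # confirm a 9p mount is live
    out/minikube-linux-arm64 -p functional-812870 ssh "sudo umount -f /mount-9p"         # manual cleanup, as the test does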

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32255
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.55s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.87s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-812870 /tmp/TestFunctionalparallelMountCmdspecific-port1140742233/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812870 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (804.073571ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812870 /tmp/TestFunctionalparallelMountCmdspecific-port1140742233/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812870 ssh "sudo umount -f /mount-9p": exit status 1 (417.525584ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-812870 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812870 /tmp/TestFunctionalparallelMountCmdspecific-port1140742233/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.87s)
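
The only difference from the any-port variant is pinning the 9p server to a fixed host port; the first findmnt probe above appears to have simply raced the mount becoming ready, succeeding on retry. Illustrative invocation (host directory hypothetical):

    out/minikube-linux-arm64 mount -p functional-812870 /tmp/some-host-dir:/mount-9p --port 46464 &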

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-812870 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3229709801/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-812870 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3229709801/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-812870 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3229709801/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-812870 ssh "findmnt -T" /mount1: (1.16017369s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh "findmnt -T" /mount3
2023/07/17 21:14:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-812870 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812870 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3229709801/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812870 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3229709801/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-812870 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3229709801/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.8s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-812870 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-812870
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-812870 image ls --format short --alsologtostderr:
I0717 21:14:28.981517 1161902 out.go:296] Setting OutFile to fd 1 ...
I0717 21:14:28.981699 1161902 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:14:28.981709 1161902 out.go:309] Setting ErrFile to fd 2...
I0717 21:14:28.981714 1161902 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:14:28.981993 1161902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
I0717 21:14:28.982702 1161902 config.go:182] Loaded profile config "functional-812870": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:14:28.982832 1161902 config.go:182] Loaded profile config "functional-812870": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:14:28.983321 1161902 cli_runner.go:164] Run: docker container inspect functional-812870 --format={{.State.Status}}
I0717 21:14:29.011128 1161902 ssh_runner.go:195] Run: systemctl --version
I0717 21:14:29.011185 1161902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812870
I0717 21:14:29.031220 1161902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34036 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/functional-812870/id_rsa Username:docker}
I0717 21:14:29.127563 1161902 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
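
The image ls format variants exercised by this and the next two tests:

    out/minikube-linux-arm64 -p functional-812870 image ls --format short   # repo:tag, one per line (above)
    out/minikube-linux-arm64 -p functional-812870 image ls --format table   # bordered table (next test)
    out/minikube-linux-arm64 -p functional-812870 image ls --format json    # IDs, digests, sizes (following test)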

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-812870 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b18bf71b941ba | 60.9MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/kube-scheduler          | v1.27.3            | bcb9e554eaab6 | 57.6MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-proxy              | v1.27.3            | fb73e92641fd5 | 68.1MB |
| gcr.io/google-containers/addon-resizer  | functional-812870  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-controller-manager | v1.27.3            | ab3683b584ae5 | 109MB  |
| registry.k8s.io/kube-apiserver          | v1.27.3            | 39dfb036b0986 | 116MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | alpine             | 66bf2c914bf4d | 42.8MB |
| docker.io/library/nginx                 | latest             | 2002d33a54f72 | 196MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/etcd                    | 3.5.7-0            | 24bc64e911039 | 182MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-812870 image ls --format table --alsologtostderr:
I0717 21:14:29.564353 1162035 out.go:296] Setting OutFile to fd 1 ...
I0717 21:14:29.564649 1162035 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:14:29.564675 1162035 out.go:309] Setting ErrFile to fd 2...
I0717 21:14:29.564695 1162035 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:14:29.564991 1162035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
I0717 21:14:29.565674 1162035 config.go:182] Loaded profile config "functional-812870": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:14:29.565884 1162035 config.go:182] Loaded profile config "functional-812870": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:14:29.566447 1162035 cli_runner.go:164] Run: docker container inspect functional-812870 --format={{.State.Status}}
I0717 21:14:29.590989 1162035 ssh_runner.go:195] Run: systemctl --version
I0717 21:14:29.591048 1162035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812870
I0717 21:14:29.623971 1162035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34036 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/functional-812870/id_rsa Username:docker}
I0717 21:14:29.721645 1162035 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-812870 image ls --format json --alsologtostderr:
[{"id":"2002d33a54f72d1333751d4d1b4793a60a635eac6e94a98daf0acea501580c4f","repoDigests":["docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef","docker.io/library/nginx@sha256:b02b0565e769314abcf0be98f78cb473bcf0a2280c11fd01a13f0043a62e5059"],"repoTags":["docker.io/library/nginx:latest"],"size":"196441873"},{"id":"39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473","repoDigests":["registry.k8s.io/kube-apiserver@sha256:699defe487a15c642f6f7718de0684e49f4353e6c63f93308d314aab4dedd090","registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"116204496"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b18bf71b941bae2e12db1c07e567ad14e4febb
c778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f","docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"60881430"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea","repoDigests":["docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6","docker.io/library/nginx@sha256:40199b09f65752fed2a540913a037a7a2c3120bd9d4cf20e7d85caafa66381d8"],"repoTags":["docker.io/library/nginx:alpine"],"size":"42812731"},{"id":"1611
cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":["registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd","registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["r
egistry.k8s.io/etcd:3.5.7-0"],"size":"182283991"},{"id":"fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a","repoDigests":["registry.k8s.io/kube-proxy@sha256:948423f9b566c1f1bfab123911520168c041193addb9157d7121eaf2bb5afc53","registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"68099991"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.
io/echoserver-arm:1.8"],"size":"87536549"},{"id":"bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540","repoDigests":["registry.k8s.io/kube-scheduler@sha256:4cc5890f8b0fc5fb3f8e07535254f8ad97d90a0335bedcc8773db4ad1e7481bf","registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"57615158"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scr
aper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-812870"],"size":"34114467"},{"id":"ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:06e413293f95c209052e171448fe17685f625c5edfbc7b63df5d87d07b4711c0","registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"108667702"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/p
ause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-812870 image ls --format json --alsologtostderr:
I0717 21:14:29.290078 1161955 out.go:296] Setting OutFile to fd 1 ...
I0717 21:14:29.290301 1161955 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:14:29.290313 1161955 out.go:309] Setting ErrFile to fd 2...
I0717 21:14:29.290319 1161955 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:14:29.290622 1161955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
I0717 21:14:29.291342 1161955 config.go:182] Loaded profile config "functional-812870": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:14:29.291532 1161955 config.go:182] Loaded profile config "functional-812870": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:14:29.292110 1161955 cli_runner.go:164] Run: docker container inspect functional-812870 --format={{.State.Status}}
I0717 21:14:29.314293 1161955 ssh_runner.go:195] Run: systemctl --version
I0717 21:14:29.314346 1161955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812870
I0717 21:14:29.337699 1161955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34036 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/functional-812870/id_rsa Username:docker}
I0717 21:14:29.431552 1161955 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
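For reference, the JSON listing above is easy to post-process on the host; a minimal sketch, assuming jq is available (jq is not part of the test harness):

    # Print each image's first tag (or <none> for untagged) with its size in bytes,
    # from the same `image ls --format json` output shown above.
    out/minikube-linux-arm64 -p functional-812870 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0] // "<none>")\t\(.size)"'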
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-812870 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea
repoDigests:
- docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6
- docker.io/library/nginx@sha256:40199b09f65752fed2a540913a037a7a2c3120bd9d4cf20e7d85caafa66381d8
repoTags:
- docker.io/library/nginx:alpine
size: "42812731"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-812870
size: "34114467"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:948423f9b566c1f1bfab123911520168c041193addb9157d7121eaf2bb5afc53
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "68099991"
- id: bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:4cc5890f8b0fc5fb3f8e07535254f8ad97d90a0335bedcc8773db4ad1e7481bf
- registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "57615158"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:699defe487a15c642f6f7718de0684e49f4353e6c63f93308d314aab4dedd090
- registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "116204496"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "60881430"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests:
- registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "182283991"
- id: ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:06e413293f95c209052e171448fe17685f625c5edfbc7b63df5d87d07b4711c0
- registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "108667702"
- id: 2002d33a54f72d1333751d4d1b4793a60a635eac6e94a98daf0acea501580c4f
repoDigests:
- docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef
- docker.io/library/nginx@sha256:b02b0565e769314abcf0be98f78cb473bcf0a2280c11fd01a13f0043a62e5059
repoTags:
- docker.io/library/nginx:latest
size: "196441873"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-812870 image ls --format yaml --alsologtostderr:
I0717 21:14:28.978660 1161901 out.go:296] Setting OutFile to fd 1 ...
I0717 21:14:28.978861 1161901 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:14:28.978886 1161901 out.go:309] Setting ErrFile to fd 2...
I0717 21:14:28.978905 1161901 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:14:28.979236 1161901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
I0717 21:14:28.979973 1161901 config.go:182] Loaded profile config "functional-812870": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:14:28.980188 1161901 config.go:182] Loaded profile config "functional-812870": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:14:28.980700 1161901 cli_runner.go:164] Run: docker container inspect functional-812870 --format={{.State.Status}}
I0717 21:14:29.002756 1161901 ssh_runner.go:195] Run: systemctl --version
I0717 21:14:29.002821 1161901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812870
I0717 21:14:29.024699 1161901 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34036 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/functional-812870/id_rsa Username:docker}
I0717 21:14:29.119062 1161901 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.99s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-812870 ssh pgrep buildkitd: exit status 1 (348.437243ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image build -t localhost/my-image:functional-812870 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-812870 image build -t localhost/my-image:functional-812870 testdata/build --alsologtostderr: (2.40022788s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-812870 image build -t localhost/my-image:functional-812870 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 191e3fa5f49
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-812870
--> 125efeb787c
Successfully tagged localhost/my-image:functional-812870
125efeb787c03ca990b30d5bee18bb70c79eed49597ffdd4215161636960a62c
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-812870 image build -t localhost/my-image:functional-812870 testdata/build --alsologtostderr:
I0717 21:14:29.645032 1162041 out.go:296] Setting OutFile to fd 1 ...
I0717 21:14:29.648578 1162041 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:14:29.648623 1162041 out.go:309] Setting ErrFile to fd 2...
I0717 21:14:29.648646 1162041 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 21:14:29.649005 1162041 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
I0717 21:14:29.650731 1162041 config.go:182] Loaded profile config "functional-812870": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:14:29.651896 1162041 config.go:182] Loaded profile config "functional-812870": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 21:14:29.652524 1162041 cli_runner.go:164] Run: docker container inspect functional-812870 --format={{.State.Status}}
I0717 21:14:29.679699 1162041 ssh_runner.go:195] Run: systemctl --version
I0717 21:14:29.679753 1162041 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-812870
I0717 21:14:29.700070 1162041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34036 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/functional-812870/id_rsa Username:docker}
I0717 21:14:29.799433 1162041 build_images.go:151] Building image from path: /tmp/build.3465525770.tar
I0717 21:14:29.799501 1162041 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 21:14:29.810594 1162041 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3465525770.tar
I0717 21:14:29.815213 1162041 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3465525770.tar: stat -c "%s %y" /var/lib/minikube/build/build.3465525770.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3465525770.tar': No such file or directory
I0717 21:14:29.815247 1162041 ssh_runner.go:362] scp /tmp/build.3465525770.tar --> /var/lib/minikube/build/build.3465525770.tar (3072 bytes)
I0717 21:14:29.845279 1162041 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3465525770
I0717 21:14:29.856652 1162041 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3465525770 -xf /var/lib/minikube/build/build.3465525770.tar
I0717 21:14:29.868415 1162041 crio.go:297] Building image: /var/lib/minikube/build/build.3465525770
I0717 21:14:29.868495 1162041 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-812870 /var/lib/minikube/build/build.3465525770 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0717 21:14:31.929193 1162041 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-812870 /var/lib/minikube/build/build.3465525770 --cgroup-manager=cgroupfs: (2.06065297s)
I0717 21:14:31.929261 1162041 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3465525770
I0717 21:14:31.940317 1162041 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3465525770.tar
I0717 21:14:31.953586 1162041 build_images.go:207] Built localhost/my-image:functional-812870 from /tmp/build.3465525770.tar
I0717 21:14:31.953615 1162041 build_images.go:123] succeeded building to: functional-812870
I0717 21:14:31.953620 1162041 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.99s)
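The STEP lines above imply a three-instruction Dockerfile in testdata/build; a minimal sketch that reproduces the same build by hand (the Dockerfile content and the /tmp path are reconstructed from the log, not copied from the repo):

    # Recreate a build context matching STEP 1/3 through 3/3 above
    mkdir -p /tmp/build && cd /tmp/build
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo test > content.txt

    # Build inside the cluster; on crio this is delegated to podman, as the stderr shows
    out/minikube-linux-arm64 -p functional-812870 image build \
      -t localhost/my-image:functional-812870 /tmp/build
    out/minikube-linux-arm64 -p functional-812870 image ls | grep my-image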
TestFunctional/parallel/ImageCommands/Setup (2.76s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.729484343s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-812870
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.76s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image load --daemon gcr.io/google-containers/addon-resizer:functional-812870 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-812870 image load --daemon gcr.io/google-containers/addon-resizer:functional-812870 --alsologtostderr: (4.766891529s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.02s)
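The daemon-load tests all follow the same pull/tag/load/verify cycle; a minimal sketch of that round trip, reusing the image and profile names from this run:

    # Stage an image in the host Docker daemon under a profile-scoped tag
    docker pull gcr.io/google-containers/addon-resizer:1.8.8
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 \
      gcr.io/google-containers/addon-resizer:functional-812870

    # Copy it from the host daemon into the cluster's image store, then verify
    out/minikube-linux-arm64 -p functional-812870 image load --daemon \
      gcr.io/google-containers/addon-resizer:functional-812870
    out/minikube-linux-arm64 -p functional-812870 image ls | grep addon-resizer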
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image load --daemon gcr.io/google-containers/addon-resizer:functional-812870 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-812870 image load --daemon gcr.io/google-containers/addon-resizer:functional-812870 --alsologtostderr: (2.639368019s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.88s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.460956638s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-812870
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image load --daemon gcr.io/google-containers/addon-resizer:functional-812870 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-812870 image load --daemon gcr.io/google-containers/addon-resizer:functional-812870 --alsologtostderr: (3.5751463s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.30s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.92s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image save gcr.io/google-containers/addon-resizer:functional-812870 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.92s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image rm gcr.io/google-containers/addon-resizer:functional-812870 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-812870 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.035238805s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)
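ImageSaveToFile and ImageLoadFromFile together exercise a tarball round trip; a minimal sketch (the /tmp path is arbitrary, the commands otherwise mirror the ones above):

    # Export an image from the cluster's runtime to a tar on the host...
    out/minikube-linux-arm64 -p functional-812870 image save \
      gcr.io/google-containers/addon-resizer:functional-812870 /tmp/addon-resizer-save.tar

    # ...then import it back and confirm it is listed
    out/minikube-linux-arm64 -p functional-812870 image load /tmp/addon-resizer-save.tar
    out/minikube-linux-arm64 -p functional-812870 image ls | grep addon-resizer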
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.7s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-812870
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-812870 image save --daemon gcr.io/google-containers/addon-resizer:functional-812870 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-812870 image save --daemon gcr.io/google-containers/addon-resizer:functional-812870 --alsologtostderr: (2.661411507s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-812870
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.70s)

TestFunctional/delete_addon-resizer_images (0.1s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-812870
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-812870
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-812870
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (95s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-822297 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0717 21:16:02.754651 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-822297 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m34.995738868s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (95.00s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.55s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-822297 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-822297 addons enable ingress --alsologtostderr -v=5: (12.548561954s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.55s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-822297 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

TestJSONOutput/start/Command (80.37s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-560548 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0717 21:19:46.234950 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-560548 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m20.373661916s)
--- PASS: TestJSONOutput/start/Command (80.37s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.83s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-560548 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.83s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.74s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-560548 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.74s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.86s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-560548 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-560548 --output=json --user=testUser: (5.858671129s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-691406 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-691406 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.340588ms)

-- stdout --
	{"specversion":"1.0","id":"574fb46b-05c2-45f0-bcc2-2bd114ca2de5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-691406] minikube v1.30.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"06cc483c-d235-46f1-bbbd-d5f8731d830e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16890"}}
	{"specversion":"1.0","id":"3797a377-316e-4ca7-8dc3-bc817a822fbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"846750b2-f7e4-4ef6-92e4-f8dc89add805","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig"}}
	{"specversion":"1.0","id":"7f3393d2-3a52-45c7-96f3-f417a1ff3b63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube"}}
	{"specversion":"1.0","id":"09c7ed99-d2bb-4fb1-8b4b-8e0f5277334b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"690e6e8d-64b9-45db-9334-98fcee0ba5b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"768e2b24-0e6f-4330-b3c3-eb0cda8f2f2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-691406" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-691406
--- PASS: TestErrorJSONOutput (0.23s)
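Every stdout line above is a self-contained CloudEvents-style JSON object, which is what the JSON-output tests assert on; a minimal sketch for extracting the error event from such a stream, assuming jq on the host and the field layout shown above:

    # The start is expected to fail with exit status 56; the error arrives as an
    # io.k8s.sigs.minikube.error event on stdout
    out/minikube-linux-arm64 start -p json-output-error-691406 --memory=2200 \
      --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"'
    out/minikube-linux-arm64 delete -p json-output-error-691406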
TestKicCustomNetwork/create_custom_network (43.45s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-571096 --network=
E0717 21:21:02.753812 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:21:08.155988 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 21:21:23.384550 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 21:21:23.389803 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 21:21:23.400025 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 21:21:23.420271 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 21:21:23.460569 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 21:21:23.540805 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 21:21:23.701220 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 21:21:24.021839 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 21:21:24.662657 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 21:21:25.943430 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 21:21:28.503637 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 21:21:33.624498 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-571096 --network=: (41.198639684s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
E0717 21:21:43.865248 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "docker-network-571096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-571096
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-571096: (2.225385711s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.45s)
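With an empty --network= value minikube creates a Docker network named after the profile, which the test then confirms via docker network ls; a minimal sketch of the same check, using the profile name from this run:

    # Create a cluster on a dedicated Docker network, then look for it by name
    out/minikube-linux-arm64 start -p docker-network-571096 --network=
    docker network ls --format '{{.Name}}' | grep docker-network-571096

    # Deleting the profile should also remove the network it created
    out/minikube-linux-arm64 delete -p docker-network-571096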
TestKicCustomNetwork/use_default_bridge_network (35.45s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-870435 --network=bridge
E0717 21:22:04.346356 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-870435 --network=bridge: (33.4564389s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-870435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-870435
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-870435: (1.971023923s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.45s)

TestKicExistingNetwork (36.05s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-932392 --network=existing-network
E0717 21:22:45.307380 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-932392 --network=existing-network: (33.782538543s)
helpers_test.go:175: Cleaning up "existing-network-932392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-932392
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-932392: (2.10336832s)
--- PASS: TestKicExistingNetwork (36.05s)

TestKicCustomSubnet (35.54s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-576647 --subnet=192.168.60.0/24
E0717 21:23:24.313279 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-576647 --subnet=192.168.60.0/24: (33.451948251s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-576647 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-576647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-576647
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-576647: (2.067281219s)
--- PASS: TestKicCustomSubnet (35.54s)
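TestKicCustomSubnet asserts that the created network actually carries the requested CIDR; a minimal sketch of the same verification, reusing the inspect command from the log:

    # Request a specific subnet for the KIC network, then read it back
    out/minikube-linux-arm64 start -p custom-subnet-576647 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-576647 \
      --format '{{(index .IPAM.Config 0).Subnet}}'    # expect 192.168.60.0/24
    out/minikube-linux-arm64 delete -p custom-subnet-576647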
TestKicStaticIP (37.46s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-314822 --static-ip=192.168.200.200
E0717 21:23:51.996831 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 21:24:07.228026 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-314822 --static-ip=192.168.200.200: (35.256742262s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-314822 ip
helpers_test.go:175: Cleaning up "static-ip-314822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-314822
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-314822: (2.045671871s)
--- PASS: TestKicStaticIP (37.46s)
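TestKicStaticIP is the same shape with a pinned node address; a minimal sketch of the start-and-verify steps it performs:

    # Pin the node IP, then confirm minikube reports the same address
    out/minikube-linux-arm64 start -p static-ip-314822 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-314822 ip    # expect 192.168.200.200
    out/minikube-linux-arm64 delete -p static-ip-314822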
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (68.56s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-744451 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-744451 --driver=docker  --container-runtime=crio: (31.151268933s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-747233 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-747233 --driver=docker  --container-runtime=crio: (32.141829083s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-744451
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-747233
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-747233" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-747233
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-747233: (1.983731835s)
helpers_test.go:175: Cleaning up "first-744451" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-744451
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-744451: (1.961972314s)
--- PASS: TestMinikubeProfile (68.56s)
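
A minimal sketch of the profile workflow being validated (names illustrative): two clusters coexist under one user, and "minikube profile" switches which one other commands target by default:

    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    minikube profile first          # make "first" the active profile
    minikube profile list -ojson    # machine-readable view of both profiles
    minikube delete -p second; minikube delete -p first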

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.02s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-118006 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-118006 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.023438573s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.02s)
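
A sketch of the mount flags exercised here, values copied from the run; --no-kubernetes keeps startup fast because only the mount behavior is under test:

    minikube start -p mount-demo --memory=2048 --mount \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=crio
    # the mounted host directory appears inside the node at /minikube-host,
    # which is exactly what the VerifyMount* steps below assert
    minikube -p mount-demo ssh -- ls /minikube-host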

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-118006 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.71s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-120222 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-120222 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.712502315s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.71s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-120222 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-118006 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-118006 --alsologtostderr -v=5: (1.687349891s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-120222 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-120222
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-120222: (1.214803145s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.87s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-120222
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-120222: (7.865111357s)
--- PASS: TestMountStart/serial/RestartStopped (8.87s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-120222 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (123.57s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-810165 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0717 21:26:02.754489 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:26:23.384520 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 21:26:51.069216 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 21:27:25.797543 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-810165 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m2.993372423s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (123.57s)
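
Stripped of the harness, the two-node bring-up is a single command (driver and runtime as used throughout this job):

    minikube start -p multinode-demo --nodes=2 --memory=2200 --driver=docker --container-runtime=crio
    minikube -p multinode-demo status   # expect one Control Plane entry and one Worker
    kubectl get nodes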

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.5s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-810165 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-810165 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-810165 -- rollout status deployment/busybox: (4.328009237s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-810165 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-810165 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-810165 -- exec busybox-67b7f59bb-mdhfd -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-810165 -- exec busybox-67b7f59bb-zhxtx -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-810165 -- exec busybox-67b7f59bb-mdhfd -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-810165 -- exec busybox-67b7f59bb-zhxtx -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-810165 -- exec busybox-67b7f59bb-mdhfd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-810165 -- exec busybox-67b7f59bb-zhxtx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.50s)
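
In plain terms, the assertion is: with busybox replicas spread across both nodes, external and in-cluster DNS must resolve from every pod. A hand-run sketch (<busybox-pod> is a placeholder for a name from the get pods output):

    kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl rollout status deployment/busybox
    kubectl get pods -o jsonpath='{.items[*].metadata.name}'
    kubectl exec <busybox-pod> -- nslookup kubernetes.io                          # external name
    kubectl exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local   # cluster name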

                                                
                                    
TestMultiNode/serial/AddNode (47.59s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-810165 -v 3 --alsologtostderr
E0717 21:28:24.313351 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-810165 -v 3 --alsologtostderr: (46.868596762s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.59s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.01s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 cp testdata/cp-test.txt multinode-810165:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 cp multinode-810165:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2556453665/001/cp-test_multinode-810165.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 cp multinode-810165:/home/docker/cp-test.txt multinode-810165-m02:/home/docker/cp-test_multinode-810165_multinode-810165-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165-m02 "sudo cat /home/docker/cp-test_multinode-810165_multinode-810165-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 cp multinode-810165:/home/docker/cp-test.txt multinode-810165-m03:/home/docker/cp-test_multinode-810165_multinode-810165-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165-m03 "sudo cat /home/docker/cp-test_multinode-810165_multinode-810165-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 cp testdata/cp-test.txt multinode-810165-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 cp multinode-810165-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2556453665/001/cp-test_multinode-810165-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 cp multinode-810165-m02:/home/docker/cp-test.txt multinode-810165:/home/docker/cp-test_multinode-810165-m02_multinode-810165.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165 "sudo cat /home/docker/cp-test_multinode-810165-m02_multinode-810165.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 cp multinode-810165-m02:/home/docker/cp-test.txt multinode-810165-m03:/home/docker/cp-test_multinode-810165-m02_multinode-810165-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165-m03 "sudo cat /home/docker/cp-test_multinode-810165-m02_multinode-810165-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 cp testdata/cp-test.txt multinode-810165-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 cp multinode-810165-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2556453665/001/cp-test_multinode-810165-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 cp multinode-810165-m03:/home/docker/cp-test.txt multinode-810165:/home/docker/cp-test_multinode-810165-m03_multinode-810165.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165 "sudo cat /home/docker/cp-test_multinode-810165-m03_multinode-810165.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 cp multinode-810165-m03:/home/docker/cp-test.txt multinode-810165-m02:/home/docker/cp-test_multinode-810165-m03_multinode-810165-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 ssh -n multinode-810165-m02 "sudo cat /home/docker/cp-test_multinode-810165-m03_multinode-810165-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.01s)
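
The matrix above reduces to three forms of minikube cp, sketched here (node names follow the <profile>-m0N convention used in the run):

    # host -> node
    minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
    # node -> host
    minikube -p multinode-demo cp multinode-demo-m02:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node
    minikube -p multinode-demo cp multinode-demo-m02:/home/docker/cp-test.txt multinode-demo:/home/docker/cp-test.txt
    # verification is a cat over ssh on the destination node
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"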

                                                
                                    
TestMultiNode/serial/StopNode (2.38s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-810165 node stop m03: (1.258558368s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-810165 status: exit status 7 (566.371271ms)

                                                
                                                
-- stdout --
	multinode-810165
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-810165-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-810165-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-810165 status --alsologtostderr: exit status 7 (557.361149ms)

                                                
                                                
-- stdout --
	multinode-810165
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-810165-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-810165-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 21:29:03.779052 1208860 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:29:03.779216 1208860 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:29:03.779227 1208860 out.go:309] Setting ErrFile to fd 2...
	I0717 21:29:03.779232 1208860 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:29:03.779628 1208860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
	I0717 21:29:03.779861 1208860 out.go:303] Setting JSON to false
	I0717 21:29:03.779925 1208860 mustload.go:65] Loading cluster: multinode-810165
	I0717 21:29:03.780718 1208860 notify.go:220] Checking for updates...
	I0717 21:29:03.780992 1208860 config.go:182] Loaded profile config "multinode-810165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:29:03.781029 1208860 status.go:255] checking status of multinode-810165 ...
	I0717 21:29:03.781709 1208860 cli_runner.go:164] Run: docker container inspect multinode-810165 --format={{.State.Status}}
	I0717 21:29:03.802086 1208860 status.go:330] multinode-810165 host status = "Running" (err=<nil>)
	I0717 21:29:03.802106 1208860 host.go:66] Checking if "multinode-810165" exists ...
	I0717 21:29:03.802408 1208860 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-810165
	I0717 21:29:03.822504 1208860 host.go:66] Checking if "multinode-810165" exists ...
	I0717 21:29:03.822850 1208860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:29:03.822916 1208860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165
	I0717 21:29:03.842623 1208860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34101 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165/id_rsa Username:docker}
	I0717 21:29:03.936235 1208860 ssh_runner.go:195] Run: systemctl --version
	I0717 21:29:03.942803 1208860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:29:03.958563 1208860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:29:04.033409 1208860 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-07-17 21:29:04.022880321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:29:04.034020 1208860 kubeconfig.go:92] found "multinode-810165" server: "https://192.168.58.2:8443"
	I0717 21:29:04.034045 1208860 api_server.go:166] Checking apiserver status ...
	I0717 21:29:04.034089 1208860 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 21:29:04.048049 1208860 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1266/cgroup
	I0717 21:29:04.060298 1208860 api_server.go:182] apiserver freezer: "5:freezer:/docker/f50b1a146f82a35b69acbef510044a71ef0ffd7f7e690b15ca461dd5db496271/crio/crio-7fa4c144efb62f222704cd1994446732d77d17e415b1ccd4b8afc6c25d3280b1"
	I0717 21:29:04.060370 1208860 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f50b1a146f82a35b69acbef510044a71ef0ffd7f7e690b15ca461dd5db496271/crio/crio-7fa4c144efb62f222704cd1994446732d77d17e415b1ccd4b8afc6c25d3280b1/freezer.state
	I0717 21:29:04.072089 1208860 api_server.go:204] freezer state: "THAWED"
	I0717 21:29:04.072119 1208860 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0717 21:29:04.081627 1208860 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0717 21:29:04.081664 1208860 status.go:421] multinode-810165 apiserver status = Running (err=<nil>)
	I0717 21:29:04.081677 1208860 status.go:257] multinode-810165 status: &{Name:multinode-810165 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 21:29:04.081694 1208860 status.go:255] checking status of multinode-810165-m02 ...
	I0717 21:29:04.082005 1208860 cli_runner.go:164] Run: docker container inspect multinode-810165-m02 --format={{.State.Status}}
	I0717 21:29:04.101429 1208860 status.go:330] multinode-810165-m02 host status = "Running" (err=<nil>)
	I0717 21:29:04.101455 1208860 host.go:66] Checking if "multinode-810165-m02" exists ...
	I0717 21:29:04.101769 1208860 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-810165-m02
	I0717 21:29:04.120593 1208860 host.go:66] Checking if "multinode-810165-m02" exists ...
	I0717 21:29:04.120908 1208860 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:29:04.120959 1208860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-810165-m02
	I0717 21:29:04.139081 1208860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34106 SSHKeyPath:/home/jenkins/minikube-integration/16890-1130480/.minikube/machines/multinode-810165-m02/id_rsa Username:docker}
	I0717 21:29:04.235959 1208860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:29:04.252674 1208860 status.go:257] multinode-810165-m02 status: &{Name:multinode-810165-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 21:29:04.252704 1208860 status.go:255] checking status of multinode-810165-m03 ...
	I0717 21:29:04.253048 1208860 cli_runner.go:164] Run: docker container inspect multinode-810165-m03 --format={{.State.Status}}
	I0717 21:29:04.272103 1208860 status.go:330] multinode-810165-m03 host status = "Stopped" (err=<nil>)
	I0717 21:29:04.272131 1208860 status.go:343] host is not running, skipping remaining checks
	I0717 21:29:04.272138 1208860 status.go:257] multinode-810165-m03 status: &{Name:multinode-810165-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
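
The exit status 7 above is the point of the test: as this run shows, a single stopped worker is enough to make minikube status return non-zero, so scripts can branch on the return code instead of parsing the table:

    minikube -p multinode-demo node stop m03
    minikube -p multinode-demo status
    rc=$?   # 7 here, because m03's host and kubelet report Stopped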

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.38s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-810165 node start m03 --alsologtostderr: (11.503005233s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.38s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (122.21s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-810165
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-810165
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-810165: (25.196961683s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-810165 --wait=true -v=8 --alsologtostderr
E0717 21:31:02.754031 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-810165 --wait=true -v=8 --alsologtostderr: (1m36.862356355s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-810165
--- PASS: TestMultiNode/serial/RestartKeepsNodes (122.21s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.06s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-810165 node delete m03: (4.32210617s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 status --alsologtostderr
E0717 21:31:23.384258 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.06s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.05s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-810165 stop: (23.870578515s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-810165 status: exit status 7 (91.329207ms)

                                                
                                                
-- stdout --
	multinode-810165
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-810165-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-810165 status --alsologtostderr: exit status 7 (88.621033ms)

                                                
                                                
-- stdout --
	multinode-810165
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-810165-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 21:31:47.926440 1217047 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:31:47.926985 1217047 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:31:47.926998 1217047 out.go:309] Setting ErrFile to fd 2...
	I0717 21:31:47.927004 1217047 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:31:47.927391 1217047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
	I0717 21:31:47.927693 1217047 out.go:303] Setting JSON to false
	I0717 21:31:47.927755 1217047 mustload.go:65] Loading cluster: multinode-810165
	I0717 21:31:47.928464 1217047 config.go:182] Loaded profile config "multinode-810165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:31:47.928494 1217047 status.go:255] checking status of multinode-810165 ...
	I0717 21:31:47.929210 1217047 cli_runner.go:164] Run: docker container inspect multinode-810165 --format={{.State.Status}}
	I0717 21:31:47.931385 1217047 notify.go:220] Checking for updates...
	I0717 21:31:47.950407 1217047 status.go:330] multinode-810165 host status = "Stopped" (err=<nil>)
	I0717 21:31:47.950428 1217047 status.go:343] host is not running, skipping remaining checks
	I0717 21:31:47.950435 1217047 status.go:257] multinode-810165 status: &{Name:multinode-810165 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 21:31:47.950457 1217047 status.go:255] checking status of multinode-810165-m02 ...
	I0717 21:31:47.950754 1217047 cli_runner.go:164] Run: docker container inspect multinode-810165-m02 --format={{.State.Status}}
	I0717 21:31:47.968230 1217047 status.go:330] multinode-810165-m02 host status = "Stopped" (err=<nil>)
	I0717 21:31:47.968253 1217047 status.go:343] host is not running, skipping remaining checks
	I0717 21:31:47.968260 1217047 status.go:257] multinode-810165-m02 status: &{Name:multinode-810165-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.05s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (80.52s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-810165 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-810165 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m19.661689688s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-810165 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.52s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (31.8s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-810165
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-810165-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-810165-m02 --driver=docker  --container-runtime=crio: exit status 14 (91.757407ms)

                                                
                                                
-- stdout --
	* [multinode-810165-m02] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-810165-m02' is duplicated with machine name 'multinode-810165-m02' in profile 'multinode-810165'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-810165-m03 --driver=docker  --container-runtime=crio
E0717 21:33:24.313291 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-810165-m03 --driver=docker  --container-runtime=crio: (29.300956518s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-810165
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-810165: exit status 80 (345.785021ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-810165
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-810165-m03 already exists in multinode-810165-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-810165-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-810165-m03: (2.008975695s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.80s)

                                                
                                    
TestPreload (145.79s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-934856 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0717 21:34:47.357652 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-934856 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m22.842400449s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-934856 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-934856 image pull gcr.io/k8s-minikube/busybox: (2.191824808s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-934856
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-934856: (5.834389388s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-934856 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0717 21:36:02.754646 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-934856 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (52.256711701s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-934856 image list
helpers_test.go:175: Cleaning up "test-preload-934856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-934856
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-934856: (2.416159633s)
--- PASS: TestPreload (145.79s)
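
The preload flow in miniature; the property being guarded is that an image pulled into a --preload=false cluster survives a stop/start cycle:

    minikube start -p preload-demo --memory=2200 --preload=false --kubernetes-version=v1.24.4 \
      --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --memory=2200 --driver=docker --container-runtime=crio
    minikube -p preload-demo image list   # busybox should still be listed
    minikube delete -p preload-demo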

                                                
                                    
TestScheduledStopUnix (108.91s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-817216 --memory=2048 --driver=docker  --container-runtime=crio
E0717 21:36:23.384513 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-817216 --memory=2048 --driver=docker  --container-runtime=crio: (31.815487618s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817216 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-817216 -n scheduled-stop-817216
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817216 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817216 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-817216 -n scheduled-stop-817216
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-817216
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-817216 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0717 21:37:46.429507 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-817216
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-817216: exit status 7 (68.400281ms)

                                                
                                                
-- stdout --
	scheduled-stop-817216
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-817216 -n scheduled-stop-817216
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-817216 -n scheduled-stop-817216: exit status 7 (71.55326ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-817216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-817216
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-817216: (5.448235217s)
--- PASS: TestScheduledStopUnix (108.91s)
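
The scheduled-stop knobs exercised above, as a hand-run sketch (profile name illustrative):

    minikube stop -p sched-demo --schedule 5m        # arm a stop five minutes out
    minikube status --format={{.TimeToStop}} -p sched-demo
    minikube stop -p sched-demo --cancel-scheduled   # disarm it
    minikube stop -p sched-demo --schedule 15s       # arm a short one and let it fire
    sleep 20
    minikube status -p sched-demo                    # exit status 7: host Stopped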

                                                
                                    
TestInsufficientStorage (10.45s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-571007 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-571007 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.899756738s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"721f4ca1-9839-4497-b97f-4c71635a943b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-571007] minikube v1.30.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dae51d56-7d8c-4ec9-a0b3-6b33c7b5dcfb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16890"}}
	{"specversion":"1.0","id":"29db8f94-e20a-4bce-9312-57359adafe51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8325ab18-8388-46f3-bf79-74ce34df20e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig"}}
	{"specversion":"1.0","id":"bc808d99-1149-4515-a5b0-350370905e73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube"}}
	{"specversion":"1.0","id":"7b5c9268-4d4a-45e1-bce7-4c81f8e5b5d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b7d360ae-4cce-4f3a-8861-928b1c7e0868","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"39e730a9-ed94-41d1-950e-a976282b7dbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ad854a93-7f7a-4852-a687-b4bd81b9fca5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7277e2f4-45cb-487c-8876-c3c79b4b8cea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7b02084-6048-4e48-b2d2-b2acf9499cbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"dce1937a-c556-489a-9410-35367e05a133","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-571007 in cluster insufficient-storage-571007","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"42e47c9c-273b-46f1-a6e4-65e97e19cc2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5f0b96eb-9301-489f-b721-46e093b9eed4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"83598896-b6f5-4ed8-9c67-4e5a2a2a3319","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-571007 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-571007 --output=json --layout=cluster: exit status 7 (311.976234ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-571007","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-571007","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 21:38:09.476087 1233562 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-571007" does not appear in /home/jenkins/minikube-integration/16890-1130480/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-571007 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-571007 --output=json --layout=cluster: exit status 7 (307.66479ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-571007","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-571007","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 21:38:09.784642 1233619 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-571007" does not appear in /home/jenkins/minikube-integration/16890-1130480/kubeconfig
	E0717 21:38:09.797694 1233619 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/insufficient-storage-571007/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-571007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-571007
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-571007: (1.934002737s)
--- PASS: TestInsufficientStorage (10.45s)
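
The MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the JSON events appear to be test-only hooks that make minikube see /var as nearly full; the contract under test is the exit code. A sketch under that assumption:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-demo --memory=2048 --output=json --driver=docker --container-runtime=crio
    echo $?   # 26 (RSRC_DOCKER_STORAGE); the error text offers --force to skip the check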

                                                
                                    
TestKubernetesUpgrade (401.39s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-845686 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-845686 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m13.664927195s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-845686
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-845686: (1.416724566s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-845686 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-845686 status --format={{.Host}}: exit status 7 (71.520544ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-845686 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0717 21:41:02.754521 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-845686 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m44.3418114s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-845686 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-845686 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-845686 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (86.608987ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-845686] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-845686
	    minikube start -p kubernetes-upgrade-845686 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8456862 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-845686 --kubernetes-version=v1.27.3
	    

** /stderr **
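Note: exit status 106 is the K8S_DOWNGRADE_UNSUPPORTED code shown in the stderr above, so the refused downgrade is the behavior the test expects. A minimal Go sketch of how a caller could detect that code and fall back to the delete-and-recreate flow the suggestion describes (the profile name is copied from the log; the wrapper itself is illustrative, not part of minikube or this suite):

package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	profile := "kubernetes-upgrade-845686" // profile name from the log above
	start := func() error {
		return exec.Command("minikube", "start", "-p", profile,
			"--kubernetes-version=v1.16.0").Run()
	}
	err := start()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 106 {
		// K8S_DOWNGRADE_UNSUPPORTED: follow suggestion 1 above and recreate
		if err := exec.Command("minikube", "delete", "-p", profile).Run(); err != nil {
			log.Fatal(err)
		}
		err = start()
	}
	if err != nil {
		log.Fatal(err)
	}
}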
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-845686 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-845686 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.31177565s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-845686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-845686
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-845686: (2.34730931s)
--- PASS: TestKubernetesUpgrade (401.39s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-189597 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-189597 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (77.668689ms)

-- stdout --
	* [NoKubernetes-189597] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
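Note: exit status 14 is minikube's MK_USAGE code; --no-kubernetes and an explicit --kubernetes-version are mutually exclusive, as the stderr above says. A toy Go sketch of that kind of flag-conflict guard (flag names mirror the command; the program is illustrative, not minikube's actual implementation):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	version := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// same guard as the MK_USAGE error above: the flags are mutually exclusive
	if *noK8s && *version != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // MK_USAGE
	}
	fmt.Println("flags ok")
}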
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (43.26s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-189597 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-189597 --driver=docker  --container-runtime=crio: (42.734717668s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-189597 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.26s)

TestNoKubernetes/serial/StartWithStopK8s (10.38s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-189597 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-189597 --no-kubernetes --driver=docker  --container-runtime=crio: (7.484869919s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-189597 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-189597 status -o json: exit status 2 (652.615978ms)

-- stdout --
	{"Name":"NoKubernetes-189597","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-189597
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-189597: (2.243077671s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.38s)
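Note: the non-zero exit from `status -o json` is expected here, since minikube signals stopped components through its exit code; the JSON payload is what confirms that `--no-kubernetes` left the host running with kubelet and the API server stopped. A small Go sketch decoding that exact payload (the struct is illustrative; field names mirror the output above):

package main

import (
	"encoding/json"
	"fmt"
)

// fields mirror the `minikube status -o json` output shown above
type profileStatus struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-189597","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var s profileStatus
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	// expected for a --no-kubernetes profile: host up, kubelet and apiserver down
	fmt.Println(s.Host == "Running" && s.Kubelet == "Stopped" && s.APIServer == "Stopped")
}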

TestNoKubernetes/serial/Start (9.74s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-189597 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-189597 --no-kubernetes --driver=docker  --container-runtime=crio: (9.742839382s)
--- PASS: TestNoKubernetes/serial/Start (9.74s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-189597 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-189597 "sudo systemctl is-active --quiet service kubelet": exit status 1 (405.996647ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.41s)
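Note: the `ssh: Process exited with status 3` above is `systemctl is-active` reporting an inactive unit (it exits 0 only when the unit is active), so a non-zero exit is precisely what "kubelet is not running" looks like. A minimal Go sketch of the same check, run directly on the host (illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet unit is active. `systemctl
// is-active --quiet` exits 0 when active and 3 when inactive, which is the
// status 3 surfaced through ssh in the log above.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}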

TestNoKubernetes/serial/ProfileList (1.15s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.15s)

TestNoKubernetes/serial/Stop (1.33s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-189597
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-189597: (1.327271169s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

TestNoKubernetes/serial/StartNoArgs (7.66s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-189597 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-189597 --driver=docker  --container-runtime=crio: (7.664814993s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.66s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.51s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-189597 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-189597 "sudo systemctl is-active --quiet service kubelet": exit status 1 (512.671661ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.51s)

TestStoppedBinaryUpgrade/Setup (1.15s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.15s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-189380
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)

TestPause/serial/Start (77.83s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-176778 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0717 21:44:05.798356 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-176778 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m17.82579775s)
--- PASS: TestPause/serial/Start (77.83s)

TestPause/serial/SecondStartNoReconfiguration (42.13s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-176778 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-176778 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.073892047s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.13s)

TestPause/serial/Pause (1.47s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-176778 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-176778 --alsologtostderr -v=5: (1.468987936s)
--- PASS: TestPause/serial/Pause (1.47s)

TestPause/serial/VerifyStatus (0.63s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-176778 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-176778 --output=json --layout=cluster: exit status 2 (627.181282ms)

-- stdout --
	{"Name":"pause-176778","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-176778","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.63s)
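Note: in the `--layout=cluster` payload above, StatusCode 418/StatusName "Paused" marks the paused apiserver and 405/"Stopped" the kubelet, which is why the command exits 2 while the test still passes. A Go sketch pulling those fields out of a trimmed copy of that payload (the types are illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

// a minimal slice of the --layout=cluster payload shown above
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		Components map[string]struct {
			StatusCode int
			StatusName string
		}
	}
}

func main() {
	raw := `{"Name":"pause-176778","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-176778","Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`
	var s clusterStatus
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	fmt.Println(s.StatusName, s.Nodes[0].Components["apiserver"].StatusName) // Paused Paused
}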

TestPause/serial/Unpause (1.21s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-176778 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-176778 --alsologtostderr -v=5: (1.21295054s)
--- PASS: TestPause/serial/Unpause (1.21s)

TestPause/serial/PauseAgain (1.64s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-176778 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-176778 --alsologtostderr -v=5: (1.639402494s)
--- PASS: TestPause/serial/PauseAgain (1.64s)

TestPause/serial/DeletePaused (3.27s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-176778 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-176778 --alsologtostderr -v=5: (3.267050342s)
--- PASS: TestPause/serial/DeletePaused (3.27s)

TestPause/serial/VerifyDeletedResources (12.89s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (12.811877505s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-176778
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-176778: exit status 1 (25.91325ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-176778: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (12.89s)
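Note: the failed `docker volume inspect` is the success condition here: after `minikube delete`, no volume named after the profile should remain, so `no such volume` confirms the cleanup. A short Go sketch of the same post-delete check (volume name copied from the log; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// after `minikube delete`, inspecting the profile volume should fail
	out, err := exec.Command("docker", "volume", "inspect", "pause-176778").CombinedOutput()
	if err != nil {
		fmt.Print("volume gone, as expected: ", string(out))
		return
	}
	fmt.Print("volume still exists: ", string(out))
}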

TestNetworkPlugins/group/false (4.5s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-247119 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-247119 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (277.51987ms)

-- stdout --
	* [false-247119] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0717 21:46:13.991557 1271997 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:46:13.991830 1271997 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:46:13.991858 1271997 out.go:309] Setting ErrFile to fd 2...
	I0717 21:46:13.991878 1271997 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:46:13.992172 1271997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-1130480/.minikube/bin
	I0717 21:46:13.992651 1271997 out.go:303] Setting JSON to false
	I0717 21:46:13.993827 1271997 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23317,"bootTime":1689607057,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0717 21:46:13.993919 1271997 start.go:138] virtualization:  
	I0717 21:46:13.996324 1271997 out.go:177] * [false-247119] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 21:46:13.998445 1271997 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 21:46:13.998529 1271997 notify.go:220] Checking for updates...
	I0717 21:46:14.003161 1271997 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:46:14.005045 1271997 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-1130480/kubeconfig
	I0717 21:46:14.007035 1271997 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-1130480/.minikube
	I0717 21:46:14.008916 1271997 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 21:46:14.010927 1271997 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:46:14.013428 1271997 config.go:182] Loaded profile config "force-systemd-flag-501082": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:46:14.013643 1271997 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:46:14.044276 1271997 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:46:14.044365 1271997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:46:14.197476 1271997 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-17 21:46:14.186892438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215175168 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 21:46:14.197574 1271997 docker.go:294] overlay module found
	I0717 21:46:14.199743 1271997 out.go:177] * Using the docker driver based on user configuration
	I0717 21:46:14.201476 1271997 start.go:298] selected driver: docker
	I0717 21:46:14.201493 1271997 start.go:880] validating driver "docker" against <nil>
	I0717 21:46:14.201506 1271997 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:46:14.203935 1271997 out.go:177] 
	W0717 21:46:14.205471 1271997 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0717 21:46:14.207028 1271997 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-247119 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-247119

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-247119

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-247119

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-247119

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-247119

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-247119

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-247119

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-247119

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-247119

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-247119

>>> host: /etc/nsswitch.conf:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: /etc/hosts:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: /etc/resolv.conf:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-247119

>>> host: crictl pods:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: crictl containers:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> k8s: describe netcat deployment:
error: context "false-247119" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-247119" does not exist

>>> k8s: netcat logs:
error: context "false-247119" does not exist

>>> k8s: describe coredns deployment:
error: context "false-247119" does not exist

>>> k8s: describe coredns pods:
error: context "false-247119" does not exist

>>> k8s: coredns logs:
error: context "false-247119" does not exist

>>> k8s: describe api server pod(s):
error: context "false-247119" does not exist

>>> k8s: api server logs:
error: context "false-247119" does not exist

>>> host: /etc/cni:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: ip a s:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: ip r s:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: iptables-save:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: iptables table nat:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> k8s: describe kube-proxy daemon set:
error: context "false-247119" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-247119" does not exist

>>> k8s: kube-proxy logs:
error: context "false-247119" does not exist

>>> host: kubelet daemon status:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: kubelet daemon config:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> k8s: kubelet logs:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-247119

>>> host: docker daemon status:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: docker daemon config:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: /etc/docker/daemon.json:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: docker system info:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: cri-docker daemon status:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: cri-docker daemon config:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: cri-dockerd version:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: containerd daemon status:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: containerd daemon config:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: /etc/containerd/config.toml:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: containerd config dump:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: crio daemon status:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: crio daemon config:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: /etc/crio:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"

>>> host: crio config:
* Profile "false-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-247119"
----------------------- debugLogs end: false-247119 [took: 4.033335163s] --------------------------------
helpers_test.go:175: Cleaning up "false-247119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-247119
--- PASS: TestNetworkPlugins/group/false (4.50s)

TestStartStop/group/old-k8s-version/serial/FirstStart (123.62s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-217693 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0717 21:48:24.313344 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-217693 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m3.621478308s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (123.62s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.77s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-217693 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [40c00562-a85d-4d26-b0a6-4dcf3e374e81] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [40c00562-a85d-4d26-b0a6-4dcf3e374e81] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.034324847s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-217693 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.77s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-217693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-217693 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/old-k8s-version/serial/Stop (12.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-217693 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-217693 --alsologtostderr -v=3: (12.151464622s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-217693 -n old-k8s-version-217693
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-217693 -n old-k8s-version-217693: exit status 7 (74.681944ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-217693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (433.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-217693 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-217693 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m12.584668377s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-217693 -n old-k8s-version-217693
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (433.06s)

TestStartStop/group/no-preload/serial/FirstStart (69.6s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-667323 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 21:51:02.754550 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:51:23.384781 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 21:51:27.357839 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-667323 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (1m9.599023777s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.60s)

TestStartStop/group/no-preload/serial/DeployApp (9.51s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-667323 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [46fb1d06-5870-4405-ba91-98c22de73711] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [46fb1d06-5870-4405-ba91-98c22de73711] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.034492933s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-667323 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.51s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-667323 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-667323 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.157188749s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-667323 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/no-preload/serial/Stop (12.11s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-667323 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-667323 --alsologtostderr -v=3: (12.107095723s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-667323 -n no-preload-667323
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-667323 -n no-preload-667323: exit status 7 (79.884883ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-667323 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (611.89s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-667323 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 21:53:24.312839 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 21:54:26.429881 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 21:56:02.754591 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 21:56:23.385051 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-667323 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (10m11.494391306s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-667323 -n no-preload-667323
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (611.89s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-584qd" [888a19e3-2716-423c-8b52-afc7d17d78c6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025123104s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)
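
Note: the "waiting 9m0s for pods matching ..." lines come from the suite's polling helper (helpers_test.go). Outside the suite, the same readiness gate can be approximated with `kubectl wait`; a sketch under that assumption, with the context, label, and namespace copied from the log above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until every dashboard pod reports Ready, mirroring the
	// k8s-app=kubernetes-dashboard wait logged above.
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-217693",
		"wait", "--for=condition=ready", "pod",
		"-l", "k8s-app=kubernetes-dashboard",
		"-n", "kubernetes-dashboard", "--timeout=9m").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}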

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-584qd" [888a19e3-2716-423c-8b52-afc7d17d78c6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006913063s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-217693 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-217693 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.49s)
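
Note: VerifyKubernetesImages works by parsing `sudo crictl images -o json` and reporting anything outside the image set minikube itself ships, hence the kindnetd/busybox "non-minikube image" lines. A sketch of that filtering, assuming the documented crictl JSON shape ({"images":[{"repoTags":[...]}]}); the registry.k8s.io allowlist is a stand-in, not the suite's real list:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			// Stand-in check: report anything not from registry.k8s.io,
			// analogous to the kindnetd/busybox hits in the log.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}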

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (5.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-217693 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-217693 --alsologtostderr -v=1: (1.400496912s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-217693 -n old-k8s-version-217693
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-217693 -n old-k8s-version-217693: exit status 2 (572.917241ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-217693 -n old-k8s-version-217693
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-217693 -n old-k8s-version-217693: exit status 2 (477.017526ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-217693 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-217693 --alsologtostderr -v=1: (1.215806511s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-217693 -n old-k8s-version-217693
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-217693 -n old-k8s-version-217693
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (5.17s)
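
Note: Pause is a five-step sequence: pause the profile, confirm the apiserver reports Paused and the kubelet Stopped (both surfaced above as tolerated "exit status 2" probes), then unpause and query again. A condensed Go sketch of that flow; the post-unpause "Running" expectations are an assumption, since the log does not print the final status outputs:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const profile = "old-k8s-version-217693"

// expect queries one status field and checks the printed component state,
// ignoring the exit code the way the test's "may be ok" branch does.
func expect(field, want string) {
	out, _ := exec.Command("minikube", "status",
		"--format={{."+field+"}}", "-p", profile).Output()
	if got := strings.TrimSpace(string(out)); got != want {
		panic(fmt.Sprintf("%s: got %q, want %q", field, got, want))
	}
}

func main() {
	run := func(args ...string) {
		if err := exec.Command("minikube", args...).Run(); err != nil {
			panic(err)
		}
	}
	run("pause", "-p", profile)
	expect("APIServer", "Paused") // exit status 2 here is tolerated
	expect("Kubelet", "Stopped")
	run("unpause", "-p", profile)
	expect("APIServer", "Running") // assumed post-unpause state
	expect("Kubelet", "Running")   // assumed post-unpause state
}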

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (84.46s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-700575 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 21:58:24.313418 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-700575 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (1m24.461288808s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.46s)

TestStartStop/group/embed-certs/serial/DeployApp (9.56s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-700575 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [939af487-3d25-44cf-be8a-8bf3887fc010] Pending
helpers_test.go:344: "busybox" [939af487-3d25-44cf-be8a-8bf3887fc010] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [939af487-3d25-44cf-be8a-8bf3887fc010] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.030319351s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-700575 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.56s)
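
Note: DeployApp ends by exec'ing `ulimit -n` in the busybox pod, a cheap probe that the container's open-file limit is usable. The same probe scripted in Go (context and pod name from the log; the 1024 floor is an illustrative threshold, not the suite's assertion):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	// Read the open-file limit inside the deployed busybox pod.
	out, err := exec.Command("kubectl", "--context", "embed-certs-700575",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		panic(err)
	}
	n, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		panic(err)
	}
	fmt.Println("open-file limit in pod:", n)
	if n < 1024 { // illustrative threshold only
		panic("suspiciously low fd limit")
	}
}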

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-700575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-700575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.100111556s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-700575 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/embed-certs/serial/Stop (12.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-700575 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-700575 --alsologtostderr -v=3: (12.173951702s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.17s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-700575 -n embed-certs-700575
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-700575 -n embed-certs-700575: exit status 7 (73.286135ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-700575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (619.81s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-700575 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 21:59:49.969002 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 21:59:49.974435 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 21:59:49.984744 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 21:59:50.008028 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 21:59:50.048290 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 21:59:50.128586 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 21:59:50.288881 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 21:59:50.609387 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 21:59:51.249523 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 21:59:52.530383 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 21:59:55.090590 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 22:00:00.212192 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 22:00:10.453367 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 22:00:30.933572 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 22:00:45.798568 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 22:01:02.754483 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 22:01:11.894110 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 22:01:23.384839 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-700575 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (10m19.275673748s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-700575 -n embed-certs-700575
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (619.81s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6b5sw" [a6ca14da-c4a8-4236-abfe-e6bae460b82b] Running
E0717 22:02:33.814329 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.02548092s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6b5sw" [a6ca14da-c4a8-4236-abfe-e6bae460b82b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006913489s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-667323 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-667323 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/no-preload/serial/Pause (3.49s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-667323 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-667323 -n no-preload-667323
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-667323 -n no-preload-667323: exit status 2 (340.065738ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-667323 -n no-preload-667323
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-667323 -n no-preload-667323: exit status 2 (361.676046ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-667323 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-667323 -n no-preload-667323
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-667323 -n no-preload-667323
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.49s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-945748 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 22:03:24.313187 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-945748 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (45.972514367s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.97s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-945748 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0d810a41-da02-470f-8492-0f17864d7f04] Pending
helpers_test.go:344: "busybox" [0d810a41-da02-470f-8492-0f17864d7f04] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0d810a41-da02-470f-8492-0f17864d7f04] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.032348914s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-945748 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-945748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-945748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.196563848s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-945748 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-945748 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-945748 --alsologtostderr -v=3: (12.124531977s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-945748 -n default-k8s-diff-port-945748
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-945748 -n default-k8s-diff-port-945748: exit status 7 (84.427412ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-945748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (603.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-945748 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 22:04:49.968328 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 22:05:17.655508 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
E0717 22:06:02.754436 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 22:06:23.384604 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 22:06:54.100263 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:06:54.105566 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:06:54.115855 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:06:54.136124 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:06:54.176474 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:06:54.256767 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:06:54.417149 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:06:54.737628 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:06:55.378507 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:06:56.659400 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:06:59.219624 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:07:04.340305 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:07:14.580944 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:07:35.061242 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:08:07.357991 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 22:08:16.021789 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:08:24.312886 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
E0717 22:09:37.942913 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:09:49.968833 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-945748 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (10m2.819336855s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-945748 -n default-k8s-diff-port-945748
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (603.21s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-mth8v" [938d2426-9b3a-4ce8-936c-293e8107aa3a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.026784779s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-mth8v" [938d2426-9b3a-4ce8-936c-293e8107aa3a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008514469s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-700575 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-700575 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/embed-certs/serial/Pause (3.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-700575 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-700575 -n embed-certs-700575
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-700575 -n embed-certs-700575: exit status 2 (373.648063ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-700575 -n embed-certs-700575
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-700575 -n embed-certs-700575: exit status 2 (369.209559ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-700575 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-700575 -n embed-certs-700575
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-700575 -n embed-certs-700575
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.42s)

TestStartStop/group/newest-cni/serial/FirstStart (43.65s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-050871 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-050871 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (43.646689848s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.65s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-050871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-050871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.186740606s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/newest-cni/serial/Stop (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-050871 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-050871 --alsologtostderr -v=3: (1.253885146s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.25s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-050871 -n newest-cni-050871
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-050871 -n newest-cni-050871: exit status 7 (72.573274ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-050871 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (30.43s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-050871 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 22:11:02.754235 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 22:11:06.430065 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
E0717 22:11:23.384622 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-050871 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (30.044854196s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-050871 -n newest-cni-050871
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.43s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-050871 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/newest-cni/serial/Pause (3.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-050871 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-050871 -n newest-cni-050871
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-050871 -n newest-cni-050871: exit status 2 (358.562618ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-050871 -n newest-cni-050871
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-050871 -n newest-cni-050871: exit status 2 (351.964812ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-050871 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-050871 -n newest-cni-050871
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-050871 -n newest-cni-050871
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.21s)

TestNetworkPlugins/group/auto/Start (50.43s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-247119 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0717 22:11:54.100006 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
E0717 22:12:21.783734 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/no-preload-667323/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-247119 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (50.430238165s)
--- PASS: TestNetworkPlugins/group/auto/Start (50.43s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-247119 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-247119 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-f695n" [10e5197a-4a77-40df-8d15-a2b825c3f774] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-f695n" [10e5197a-4a77-40df-8d15-a2b825c3f774] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.036794395s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.41s)

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-247119 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)
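
Note: the DNS probe resolves kubernetes.default from inside the netcat deployment, exercising the cluster's DNS path end to end. A sketch of the same probe with a small retry loop, since in-cluster DNS can lag just after rollout (context and deployment name from the log; the retry budget is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Resolve the kubernetes service name from inside the cluster.
	for attempt := 1; ; attempt++ {
		out, err := exec.Command("kubectl", "--context", "auto-247119",
			"exec", "deployment/netcat", "--",
			"nslookup", "kubernetes.default").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		if attempt == 5 { // illustrative retry budget
			panic(fmt.Sprintf("DNS never resolved: %v\n%s", err, out))
		}
		time.Sleep(3 * time.Second)
	}
}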

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
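
Note: Localhost and HairPin are the same netcat probe aimed at two targets: localhost:8080 inside the pod, and the pod's own service name netcat:8080, which only works when the network plugin supports hairpin traffic. A sketch pairing the two; treating a hairpin failure as fatal is an assumption here (some plugins legitimately fail it), though the auto run above passed both:

package main

import (
	"fmt"
	"os/exec"
)

// dial runs `nc -z` inside the netcat pod against the given target,
// matching the test's Localhost and HairPin probes.
func dial(target string) error {
	return exec.Command("kubectl", "--context", "auto-247119",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z "+target+" 8080").Run()
}

func main() {
	if err := dial("localhost"); err != nil {
		panic(err) // loopback inside the pod should always work
	}
	if err := dial("netcat"); err != nil {
		// Hairpin: the pod reaching itself through its own service.
		panic(err)
	}
	fmt.Println("localhost and hairpin probes OK")
}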

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (78.06s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-247119 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0717 22:13:24.313492 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-247119 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m18.055466269s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.06s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-nrkjl" [ffa4c711-883a-4bb4-a6bb-9449ea3dc9fc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.024045497s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-nrkjl" [ffa4c711-883a-4bb4-a6bb-9449ea3dc9fc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007164521s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-945748 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-945748 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-945748 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-945748 -n default-k8s-diff-port-945748
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-945748 -n default-k8s-diff-port-945748: exit status 2 (372.657227ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-945748 -n default-k8s-diff-port-945748
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-945748 -n default-k8s-diff-port-945748: exit status 2 (346.623402ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-945748 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-945748 -n default-k8s-diff-port-945748
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-945748 -n default-k8s-diff-port-945748
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.51s)
E0717 22:18:52.886886 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/default-k8s-diff-port-945748/client.crt: no such file or directory
E0717 22:19:13.367996 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/default-k8s-diff-port-945748/client.crt: no such file or directory
E0717 22:19:16.005121 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/kindnet-247119/client.crt: no such file or directory
E0717 22:19:16.011465 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/kindnet-247119/client.crt: no such file or directory
E0717 22:19:16.021731 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/kindnet-247119/client.crt: no such file or directory
E0717 22:19:16.042025 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/kindnet-247119/client.crt: no such file or directory
E0717 22:19:16.082276 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/kindnet-247119/client.crt: no such file or directory
E0717 22:19:16.162673 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/kindnet-247119/client.crt: no such file or directory
E0717 22:19:16.323139 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/kindnet-247119/client.crt: no such file or directory
E0717 22:19:16.643658 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/kindnet-247119/client.crt: no such file or directory
E0717 22:19:17.284579 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/kindnet-247119/client.crt: no such file or directory
E0717 22:19:18.565193 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/kindnet-247119/client.crt: no such file or directory
E0717 22:19:21.125399 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/kindnet-247119/client.crt: no such file or directory
E0717 22:19:26.246426 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/kindnet-247119/client.crt: no such file or directory
E0717 22:19:36.486632 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/kindnet-247119/client.crt: no such file or directory
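
Note: the pause/unpause cycle above can be replayed by hand with the same binary and profile. A minimal sketch built from the logged commands (the `|| echo` exit-code capture is an illustrative addition, not part of the test):

	out/minikube-linux-arm64 pause -p default-k8s-diff-port-945748 --alsologtostderr -v=1
	# While paused, "status" prints the component state and exits 2 (the "may be ok" above),
	# so capture the exit code rather than letting a `set -e` shell abort.
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-945748 || echo "apiserver status exit: $?"  # expect "Paused"
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-945748 || echo "kubelet status exit: $?"      # expect "Stopped"
	out/minikube-linux-arm64 unpause -p default-k8s-diff-port-945748 --alsologtostderr -v=1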

TestNetworkPlugins/group/calico/Start (75.62s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-247119 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-247119 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m15.616201319s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.62s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bgcfp" [d1ed7bbb-9f33-4db7-8e7b-4333830f2564] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.042640459s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)
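
Note: ControllerPod waits for a Ready pod behind a label selector. Roughly the same check can be run with kubectl alone; a sketch, with `kubectl wait` standing in for the test's own polling helper:

	kubectl --context kindnet-247119 -n kube-system wait pod \
		--selector=app=kindnet --for=condition=Ready --timeout=10m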

TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-247119 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.68s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-247119 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-kb7xp" [ef4366db-6d90-4855-a82f-f5a2b036d7b9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-kb7xp" [ef4366db-6d90-4855-a82f-f5a2b036d7b9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.012352265s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.68s)

TestNetworkPlugins/group/kindnet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-247119 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

TestNetworkPlugins/group/kindnet/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.27s)

TestNetworkPlugins/group/kindnet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)
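
Note: Localhost and HairPin differ only in the dial target. Connecting to `localhost 8080` proves the pod can reach its own port directly; dialing the `netcat` service name from the pod that backs it pushes traffic out through the service VIP and back into the same pod (hairpin NAT). Both probes as run above:

	# local reachability
	kubectl --context kindnet-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: reach the pod through its own service
	kubectl --context kindnet-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"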

TestNetworkPlugins/group/custom-flannel/Start (77.75s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-247119 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-247119 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m17.746539211s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (77.75s)
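
Note: as the invocation shows, `--cni` takes either a built-in plugin name or a path to a CNI manifest that minikube applies once the node is up. Both forms from this run, abridged to the relevant flags:

	# built-in plugin by name (see the flannel group below)
	out/minikube-linux-arm64 start -p flannel-247119 --cni=flannel --driver=docker --container-runtime=crio
	# custom manifest from the test's testdata
	out/minikube-linux-arm64 start -p custom-flannel-247119 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio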

TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jmj95" [dcd7ea0d-da3d-43ff-9196-225af572d287] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.039521245s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-247119 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

TestNetworkPlugins/group/calico/NetCatPod (12.56s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-247119 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-jj9gz" [35a11d00-bb62-47f2-b837-4497a6c4e8f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-jj9gz" [35a11d00-bb62-47f2-b837-4497a6c4e8f1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.022259484s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.56s)
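
Note: `kubectl replace --force` deletes and recreates the manifest's objects, so every plugin run starts from a fresh netcat deployment regardless of leftovers. The deploy-then-poll sequence, sketched with `kubectl wait` standing in for the test helper:

	kubectl --context calico-247119 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context calico-247119 wait pod --selector=app=netcat --for=condition=Ready --timeout=15m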

TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-247119 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

TestNetworkPlugins/group/calico/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

TestNetworkPlugins/group/enable-default-cni/Start (89.98s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-247119 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-247119 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m29.977392025s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.98s)
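
Note: `--enable-default-cni=true` is the legacy spelling for minikube's built-in bridge CNI; recent minikube treats it as an alias for `--cni=bridge` (exercised separately by the bridge group below). Abridged:

	out/minikube-linux-arm64 start -p enable-default-cni-247119 --enable-default-cni=true --driver=docker --container-runtime=crio
	# roughly equivalent on recent minikube
	out/minikube-linux-arm64 start -p bridge-247119 --cni=bridge --driver=docker --container-runtime=crio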

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-247119 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.58s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-247119 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-hqpxg" [8a170116-52b8-428f-9c4d-dab0190a6b4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 22:16:23.384390 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/ingress-addon-legacy-822297/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-hqpxg" [8a170116-52b8-428f-9c4d-dab0190a6b4b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.016315856s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.58s)

TestNetworkPlugins/group/custom-flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-247119 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.30s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/flannel/Start (69.4s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-247119 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0717 22:17:24.949170 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/auto-247119/client.crt: no such file or directory
E0717 22:17:24.954391 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/auto-247119/client.crt: no such file or directory
E0717 22:17:24.964607 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/auto-247119/client.crt: no such file or directory
E0717 22:17:24.984829 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/auto-247119/client.crt: no such file or directory
E0717 22:17:25.025074 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/auto-247119/client.crt: no such file or directory
E0717 22:17:25.105275 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/auto-247119/client.crt: no such file or directory
E0717 22:17:25.265658 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/auto-247119/client.crt: no such file or directory
E0717 22:17:25.586122 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/auto-247119/client.crt: no such file or directory
E0717 22:17:25.798791 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/addons-966885/client.crt: no such file or directory
E0717 22:17:26.226389 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/auto-247119/client.crt: no such file or directory
E0717 22:17:27.506597 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/auto-247119/client.crt: no such file or directory
E0717 22:17:30.066919 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/auto-247119/client.crt: no such file or directory
E0717 22:17:35.187994 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/auto-247119/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-247119 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m9.39869465s)
--- PASS: TestNetworkPlugins/group/flannel/Start (69.40s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-247119 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.48s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.73s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-247119 replace --force -f testdata/netcat-deployment.yaml
E0717 22:17:45.428314 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/auto-247119/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-n4tj4" [266f3064-9450-4f06-aeb9-714e3b7b9768] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-n4tj4" [266f3064-9450-4f06-aeb9-714e3b7b9768] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.01031058s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.73s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-247119 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-sqg7x" [ef4eaf5b-01c4-4cc6-9672-06e0a08300f5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.039318767s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-247119 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.49s)

TestNetworkPlugins/group/flannel/NetCatPod (11.46s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-247119 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-cvzf8" [15530a3e-af40-40fa-85f2-f4ac1e5b0146] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-cvzf8" [15530a3e-af40-40fa-85f2-f4ac1e5b0146] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.009131866s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.46s)

TestNetworkPlugins/group/bridge/Start (88.56s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-247119 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-247119 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m28.550618067s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.56s)

TestNetworkPlugins/group/flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-247119 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0717 22:18:24.312866 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/functional-812870/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-247119 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-247119 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-kxnkx" [acc724eb-66a8-47cb-80ba-4d1b3ea49ce5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 22:19:49.968526 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/old-k8s-version-217693/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-kxnkx" [acc724eb-66a8-47cb-80ba-4d1b3ea49ce5] Running
E0717 22:19:54.329024 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/default-k8s-diff-port-945748/client.crt: no such file or directory
E0717 22:19:56.967238 1135872 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-1130480/.minikube/profiles/kindnet-247119/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.018679856s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.36s)

TestNetworkPlugins/group/bridge/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-247119 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.33s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-247119 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

Test skip (29/304)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestDownloadOnlyKic (0.58s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-401645 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-401645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-401645
--- SKIP: TestDownloadOnlyKic (0.58s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-539686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-539686
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (4.35s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-247119 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-247119

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-247119

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-247119

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-247119

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-247119

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-247119

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-247119

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-247119

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-247119

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-247119

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> host: /etc/hosts:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> host: /etc/resolv.conf:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-247119

>>> host: crictl pods:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> host: crictl containers:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> k8s: describe netcat deployment:
error: context "kubenet-247119" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-247119" does not exist

>>> k8s: netcat logs:
error: context "kubenet-247119" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-247119" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-247119" does not exist

>>> k8s: coredns logs:
error: context "kubenet-247119" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-247119" does not exist

>>> k8s: api server logs:
error: context "kubenet-247119" does not exist

>>> host: /etc/cni:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> host: ip a s:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> host: ip r s:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> host: iptables-save:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> host: iptables table nat:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-247119" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-247119" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-247119" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> host: kubelet daemon config:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> k8s: kubelet logs:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-247119

>>> host: docker daemon status:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

>>> host: docker daemon config:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-247119"

                                                
                                                
----------------------- debugLogs end: kubenet-247119 [took: 4.175311385s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-247119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-247119
--- SKIP: TestNetworkPlugins/group/kubenet (4.35s)
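
Note: the repeated errors above are the expected shape for a skipped test. debugLogs collects diagnostics by profile name, and because kubenet-247119 was never started there is no minikube profile and no kubeconfig context to answer the probes. A minimal way to verify this locally (a sketch; the first and last commands are the ones this log itself suggests, the middle one is kubectl's standard context listing):

    # no profile exists for the skipped test
    minikube profile list
    # the kubeconfig dumped above has null clusters/contexts, so kubectl has no target
    kubectl config get-contexts
    # starting the profile, as the log suggests, would create both
    minikube start -p kubenet-247119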

x
+
TestNetworkPlugins/group/cilium (4.81s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-247119 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-247119

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-247119

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-247119

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-247119

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-247119

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-247119

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-247119

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-247119

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-247119

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-247119

>>> host: /etc/nsswitch.conf:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: /etc/hosts:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: /etc/resolv.conf:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-247119

>>> host: crictl pods:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: crictl containers:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> k8s: describe netcat deployment:
error: context "cilium-247119" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-247119" does not exist

>>> k8s: netcat logs:
error: context "cilium-247119" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-247119" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-247119" does not exist

>>> k8s: coredns logs:
error: context "cilium-247119" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-247119" does not exist

>>> k8s: api server logs:
error: context "cilium-247119" does not exist

>>> host: /etc/cni:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: ip a s:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: ip r s:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: iptables-save:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: iptables table nat:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-247119

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-247119

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-247119" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-247119" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-247119

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-247119

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-247119" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-247119" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-247119" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-247119" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-247119" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: kubelet daemon config:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> k8s: kubelet logs:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-247119

>>> host: docker daemon status:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: docker daemon config:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: docker system info:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: cri-docker daemon status:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: cri-docker daemon config:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: cri-dockerd version:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: containerd daemon status:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: containerd daemon config:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: containerd config dump:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: crio daemon status:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: crio daemon config:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: /etc/crio:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

>>> host: crio config:
* Profile "cilium-247119" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-247119"

----------------------- debugLogs end: cilium-247119 [took: 4.57303171s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-247119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-247119
--- SKIP: TestNetworkPlugins/group/cilium (4.81s)
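
Note: two different tools produce the two error shapes in these debug logs. The "* Profile ... not found" lines are minikube's host-side response, while "Error in configuration: context was not found" and error: context "cilium-247119" does not exist are kubectl's responses to the empty kubeconfig shown under ">>> k8s: kubectl config:" (clusters, contexts, and users are all null). A minimal sketch reproducing the kubectl variant; this is a standard kubectl invocation, not a command from the test suite:

    # any context-scoped call against an empty kubeconfig fails the same way:
    kubectl --context cilium-247119 get pods
    # error: context "cilium-247119" does not exist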