Test Report: Docker_Linux_crio_arm64 17145

18848273edc5eb926291da53102e5aefa8069f6f:2023-08-30:30788

Failed tests (7/304)

Order  Failed test                                           Duration (s)
25     TestAddons/parallel/Ingress                           174.16
88     TestFunctional/parallel/PersistentVolumeClaim         189.47
154    TestIngressAddonLegacy/serial/ValidateIngressAddons   179.31
204    TestMultiNode/serial/PingHostFrom2Pods                4.87
225    TestRunningBinaryUpgrade                              67.89
228    TestMissingContainerUpgrade                           99.09
260    TestStoppedBinaryUpgrade/Upgrade                      107.64
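Each failure above can be re-run in isolation with the standard Go test runner (a sketch, assuming a minikube source checkout with out/minikube-linux-arm64 already built; the exact harness flags this CI job passes are not reproduced here):

    # re-run one failed test from the table above
    go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 30m -v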
TestAddons/parallel/Ingress (174.16s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-934429 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-934429 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-934429 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [10ce1587-2398-4a36-af04-d6781e917c9b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [10ce1587-2398-4a36-af04-d6781e917c9b] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.026284924s
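The readiness poll above is the harness-side equivalent of a plain kubectl wait; a sketch using the same selector, namespace, and timeout:

    kubectl --context addons-934429 -n default wait --for=condition=ready pod -l run=nginx --timeout=8m0s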
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-934429 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-934429 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.468944411s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
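curl exits with status 28 on an operation timeout, so the ssh step above stalled waiting for a response from the ingress rather than failing to connect over ssh. A manual re-check along the same lines, with an explicit client-side bound added (a sketch; -m is curl's max-time flag):

    # retry the in-VM request with a bounded timeout
    out/minikube-linux-arm64 -p addons-934429 ssh "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # confirm the ingress controller is still serving
    kubectl --context addons-934429 -n ingress-nginx get pods,svc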
addons_test.go:262: (dbg) Run:  kubectl --context addons-934429 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:262: (dbg) Done: kubectl --context addons-934429 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.002720028s)
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-934429 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.103157753s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
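The nslookup timeout means the ingress-dns responder at 192.168.49.2 never answered within nslookup's default retry window. A bounded re-query plus a pod check (a sketch; the label selector is an assumption about how the addon tags its pod, not something shown in this log):

    # query again with short, explicit timeouts
    dig @192.168.49.2 hello-john.test +time=5 +tries=1
    # assumed label for the ingress-dns addon pod
    kubectl --context addons-934429 -n kube-system get pods -l app=minikube-ingress-dns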
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-934429 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-934429 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-934429 addons disable ingress --alsologtostderr -v=1: (7.72786789s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-934429
helpers_test.go:235: (dbg) docker inspect addons-934429:
-- stdout --
	[
	    {
	        "Id": "0542b2ab2af8657b2403938beb0ae01b05ada9b8ae95626b9bf6952b69a7eebc",
	        "Created": "2023-08-30T21:38:18.180954239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 990786,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-30T21:38:18.498259408Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c0704b3a4f8b9b9ec71e677be36506d49ffd7d56513ca0bdb5d12d8921195405",
	        "ResolvConfPath": "/var/lib/docker/containers/0542b2ab2af8657b2403938beb0ae01b05ada9b8ae95626b9bf6952b69a7eebc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0542b2ab2af8657b2403938beb0ae01b05ada9b8ae95626b9bf6952b69a7eebc/hostname",
	        "HostsPath": "/var/lib/docker/containers/0542b2ab2af8657b2403938beb0ae01b05ada9b8ae95626b9bf6952b69a7eebc/hosts",
	        "LogPath": "/var/lib/docker/containers/0542b2ab2af8657b2403938beb0ae01b05ada9b8ae95626b9bf6952b69a7eebc/0542b2ab2af8657b2403938beb0ae01b05ada9b8ae95626b9bf6952b69a7eebc-json.log",
	        "Name": "/addons-934429",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-934429:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-934429",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/502a90ada62cb4c5ba79001a92c6cfb6187e2bbe8c61f124d12a7fe919eef560-init/diff:/var/lib/docker/overlay2/5a8abadbbe02000d4a1cbd31235f9b3bba474489fe1515f2d12f946a2d011f32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/502a90ada62cb4c5ba79001a92c6cfb6187e2bbe8c61f124d12a7fe919eef560/merged",
	                "UpperDir": "/var/lib/docker/overlay2/502a90ada62cb4c5ba79001a92c6cfb6187e2bbe8c61f124d12a7fe919eef560/diff",
	                "WorkDir": "/var/lib/docker/overlay2/502a90ada62cb4c5ba79001a92c6cfb6187e2bbe8c61f124d12a7fe919eef560/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-934429",
	                "Source": "/var/lib/docker/volumes/addons-934429/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-934429",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-934429",
	                "name.minikube.sigs.k8s.io": "addons-934429",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f1fc88327c3770cc20e510f0bdf3a54c586e6fb0ea08fe1fe620c433bbc9c206",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34013"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34012"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34009"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34011"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34010"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f1fc88327c37",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-934429": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0542b2ab2af8",
	                        "addons-934429"
	                    ],
	                    "NetworkID": "b8e819fd544060f44f6d062f7dfe465afb3544ef402444664706966606cc63d0",
	                    "EndpointID": "221cdbefd279ab9036a9e99032eb5debb31566ea6f04682a816fb74846426432",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
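Individual fields from the inspect document above can be pulled with a Go template instead of dumping the whole object, the same mechanism the harness itself uses later in this log (for example):

    # container state and the static IP on the addons-934429 network
    docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "addons-934429").IPAddress}}' addons-934429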
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-934429 -n addons-934429
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-934429 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-934429 logs -n 25: (1.624149147s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-136653   | jenkins | v1.31.2 | 30 Aug 23 21:37 UTC |                     |
	|         | -p download-only-136653        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-136653   | jenkins | v1.31.2 | 30 Aug 23 21:37 UTC |                     |
	|         | -p download-only-136653        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.2 | 30 Aug 23 21:37 UTC | 30 Aug 23 21:37 UTC |
	| delete  | -p download-only-136653        | download-only-136653   | jenkins | v1.31.2 | 30 Aug 23 21:37 UTC | 30 Aug 23 21:37 UTC |
	| delete  | -p download-only-136653        | download-only-136653   | jenkins | v1.31.2 | 30 Aug 23 21:37 UTC | 30 Aug 23 21:37 UTC |
	| start   | --download-only -p             | download-docker-195060 | jenkins | v1.31.2 | 30 Aug 23 21:37 UTC |                     |
	|         | download-docker-195060         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-195060      | download-docker-195060 | jenkins | v1.31.2 | 30 Aug 23 21:37 UTC | 30 Aug 23 21:37 UTC |
	| start   | --download-only -p             | binary-mirror-633577   | jenkins | v1.31.2 | 30 Aug 23 21:37 UTC |                     |
	|         | binary-mirror-633577           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33847         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-633577        | binary-mirror-633577   | jenkins | v1.31.2 | 30 Aug 23 21:37 UTC | 30 Aug 23 21:37 UTC |
	| start   | -p addons-934429               | addons-934429          | jenkins | v1.31.2 | 30 Aug 23 21:37 UTC | 30 Aug 23 21:40 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-934429          | jenkins | v1.31.2 | 30 Aug 23 21:40 UTC | 30 Aug 23 21:40 UTC |
	|         | addons-934429                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-934429          | jenkins | v1.31.2 | 30 Aug 23 21:40 UTC | 30 Aug 23 21:40 UTC |
	|         | -p addons-934429               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-934429 ip               | addons-934429          | jenkins | v1.31.2 | 30 Aug 23 21:40 UTC | 30 Aug 23 21:40 UTC |
	| addons  | addons-934429 addons disable   | addons-934429          | jenkins | v1.31.2 | 30 Aug 23 21:40 UTC | 30 Aug 23 21:40 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-934429 addons           | addons-934429          | jenkins | v1.31.2 | 30 Aug 23 21:41 UTC | 30 Aug 23 21:41 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-934429          | jenkins | v1.31.2 | 30 Aug 23 21:41 UTC | 30 Aug 23 21:41 UTC |
	|         | addons-934429                  |                        |         |         |                     |                     |
	| ssh     | addons-934429 ssh curl -s      | addons-934429          | jenkins | v1.31.2 | 30 Aug 23 21:41 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | addons-934429 addons           | addons-934429          | jenkins | v1.31.2 | 30 Aug 23 21:41 UTC | 30 Aug 23 21:42 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-934429 addons           | addons-934429          | jenkins | v1.31.2 | 30 Aug 23 21:42 UTC | 30 Aug 23 21:42 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-934429 ip               | addons-934429          | jenkins | v1.31.2 | 30 Aug 23 21:43 UTC | 30 Aug 23 21:43 UTC |
	| addons  | addons-934429 addons disable   | addons-934429          | jenkins | v1.31.2 | 30 Aug 23 21:44 UTC | 30 Aug 23 21:44 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-934429 addons disable   | addons-934429          | jenkins | v1.31.2 | 30 Aug 23 21:44 UTC | 30 Aug 23 21:44 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 21:37:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 21:37:52.939869  990319 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:37:52.940062  990319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:37:52.940090  990319 out.go:309] Setting ErrFile to fd 2...
	I0830 21:37:52.940110  990319 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:37:52.940388  990319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
	I0830 21:37:52.940830  990319 out.go:303] Setting JSON to false
	I0830 21:37:52.941915  990319 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22807,"bootTime":1693408666,"procs":417,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0830 21:37:52.942019  990319 start.go:138] virtualization:  
	I0830 21:37:52.945160  990319 out.go:177] * [addons-934429] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0830 21:37:52.947958  990319 out.go:177]   - MINIKUBE_LOCATION=17145
	I0830 21:37:52.951102  990319 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:37:52.948127  990319 notify.go:220] Checking for updates...
	I0830 21:37:52.953856  990319 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 21:37:52.956058  990319 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	I0830 21:37:52.958210  990319 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0830 21:37:52.960260  990319 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 21:37:52.962953  990319 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 21:37:52.986926  990319 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 21:37:52.987016  990319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 21:37:53.070711  990319 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-30 21:37:53.060585427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 21:37:53.070815  990319 docker.go:294] overlay module found
	I0830 21:37:53.074737  990319 out.go:177] * Using the docker driver based on user configuration
	I0830 21:37:53.076853  990319 start.go:298] selected driver: docker
	I0830 21:37:53.076875  990319 start.go:902] validating driver "docker" against <nil>
	I0830 21:37:53.076889  990319 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 21:37:53.077570  990319 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 21:37:53.147437  990319 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-30 21:37:53.137935685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 21:37:53.147604  990319 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 21:37:53.147899  990319 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 21:37:53.149850  990319 out.go:177] * Using Docker driver with root privileges
	I0830 21:37:53.151930  990319 cni.go:84] Creating CNI manager for ""
	I0830 21:37:53.151946  990319 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0830 21:37:53.151956  990319 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0830 21:37:53.151977  990319 start_flags.go:319] config:
	{Name:addons-934429 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-934429 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:37:53.155124  990319 out.go:177] * Starting control plane node addons-934429 in cluster addons-934429
	I0830 21:37:53.157084  990319 cache.go:122] Beginning downloading kic base image for docker with crio
	I0830 21:37:53.158946  990319 out.go:177] * Pulling base image ...
	I0830 21:37:53.160728  990319 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:37:53.160777  990319 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0830 21:37:53.160793  990319 cache.go:57] Caching tarball of preloaded images
	I0830 21:37:53.160823  990319 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0830 21:37:53.160865  990319 preload.go:174] Found /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0830 21:37:53.160875  990319 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 21:37:53.161395  990319 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/config.json ...
	I0830 21:37:53.161428  990319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/config.json: {Name:mk51a0b82068333a039fbc8bb716b3fa5f29d085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:37:53.177853  990319 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad to local cache
	I0830 21:37:53.177968  990319 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local cache directory
	I0830 21:37:53.177991  990319 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local cache directory, skipping pull
	I0830 21:37:53.177999  990319 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad exists in cache, skipping pull
	I0830 21:37:53.178006  990319 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad as a tarball
	I0830 21:37:53.178012  990319 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad from local cache
	I0830 21:38:09.078428  990319 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad from cached tarball
	I0830 21:38:09.078471  990319 cache.go:195] Successfully downloaded all kic artifacts
	I0830 21:38:09.078548  990319 start.go:365] acquiring machines lock for addons-934429: {Name:mk027be4266aebaeeffbea62727996d5fd3699c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 21:38:09.078689  990319 start.go:369] acquired machines lock for "addons-934429" in 121.156µs
	I0830 21:38:09.078717  990319 start.go:93] Provisioning new machine with config: &{Name:addons-934429 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-934429 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 21:38:09.078802  990319 start.go:125] createHost starting for "" (driver="docker")
	I0830 21:38:09.081314  990319 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0830 21:38:09.081601  990319 start.go:159] libmachine.API.Create for "addons-934429" (driver="docker")
	I0830 21:38:09.081640  990319 client.go:168] LocalClient.Create starting
	I0830 21:38:09.081785  990319 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem
	I0830 21:38:10.314081  990319 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem
	I0830 21:38:11.814741  990319 cli_runner.go:164] Run: docker network inspect addons-934429 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0830 21:38:11.832608  990319 cli_runner.go:211] docker network inspect addons-934429 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0830 21:38:11.832738  990319 network_create.go:281] running [docker network inspect addons-934429] to gather additional debugging logs...
	I0830 21:38:11.832787  990319 cli_runner.go:164] Run: docker network inspect addons-934429
	W0830 21:38:11.851142  990319 cli_runner.go:211] docker network inspect addons-934429 returned with exit code 1
	I0830 21:38:11.851179  990319 network_create.go:284] error running [docker network inspect addons-934429]: docker network inspect addons-934429: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-934429 not found
	I0830 21:38:11.851198  990319 network_create.go:286] output of [docker network inspect addons-934429]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-934429 not found
	
	** /stderr **
	I0830 21:38:11.851269  990319 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0830 21:38:11.872612  990319 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000f66a10}
	I0830 21:38:11.872651  990319 network_create.go:123] attempt to create docker network addons-934429 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0830 21:38:11.872712  990319 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-934429 addons-934429
	I0830 21:38:11.964950  990319 network_create.go:107] docker network addons-934429 192.168.49.0/24 created
	I0830 21:38:11.964978  990319 kic.go:117] calculated static IP "192.168.49.2" for the "addons-934429" container
	I0830 21:38:11.965061  990319 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0830 21:38:11.986881  990319 cli_runner.go:164] Run: docker volume create addons-934429 --label name.minikube.sigs.k8s.io=addons-934429 --label created_by.minikube.sigs.k8s.io=true
	I0830 21:38:12.013036  990319 oci.go:103] Successfully created a docker volume addons-934429
	I0830 21:38:12.013165  990319 cli_runner.go:164] Run: docker run --rm --name addons-934429-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-934429 --entrypoint /usr/bin/test -v addons-934429:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -d /var/lib
	I0830 21:38:13.907446  990319 cli_runner.go:217] Completed: docker run --rm --name addons-934429-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-934429 --entrypoint /usr/bin/test -v addons-934429:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -d /var/lib: (1.894238726s)
	I0830 21:38:13.907476  990319 oci.go:107] Successfully prepared a docker volume addons-934429
	I0830 21:38:13.907497  990319 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:38:13.907515  990319 kic.go:190] Starting extracting preloaded images to volume ...
	I0830 21:38:13.907601  990319 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-934429:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir
	I0830 21:38:18.093918  990319 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-934429:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir: (4.186275744s)
	I0830 21:38:18.093954  990319 kic.go:199] duration metric: took 4.186434 seconds to extract preloaded images to volume
	W0830 21:38:18.094118  990319 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0830 21:38:18.094231  990319 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0830 21:38:18.165159  990319 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-934429 --name addons-934429 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-934429 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-934429 --network addons-934429 --ip 192.168.49.2 --volume addons-934429:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad
	I0830 21:38:18.507619  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Running}}
	I0830 21:38:18.534938  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:38:18.559004  990319 cli_runner.go:164] Run: docker exec addons-934429 stat /var/lib/dpkg/alternatives/iptables
	I0830 21:38:18.629490  990319 oci.go:144] the created container "addons-934429" has a running status.
	I0830 21:38:18.629516  990319 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa...
	I0830 21:38:18.768050  990319 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0830 21:38:18.792953  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:38:18.815747  990319 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0830 21:38:18.815771  990319 kic_runner.go:114] Args: [docker exec --privileged addons-934429 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0830 21:38:18.902147  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:38:18.932745  990319 machine.go:88] provisioning docker machine ...
	I0830 21:38:18.932773  990319 ubuntu.go:169] provisioning hostname "addons-934429"
	I0830 21:38:18.932985  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:18.954103  990319 main.go:141] libmachine: Using SSH client type: native
	I0830 21:38:18.954555  990319 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34013 <nil> <nil>}
	I0830 21:38:18.954569  990319 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-934429 && echo "addons-934429" | sudo tee /etc/hostname
	I0830 21:38:18.955113  990319 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52952->127.0.0.1:34013: read: connection reset by peer
	I0830 21:38:22.115952  990319 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-934429
	
	I0830 21:38:22.116033  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:22.134311  990319 main.go:141] libmachine: Using SSH client type: native
	I0830 21:38:22.134755  990319 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34013 <nil> <nil>}
	I0830 21:38:22.134777  990319 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-934429' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-934429/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-934429' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 21:38:22.270102  990319 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 21:38:22.270131  990319 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17145-984449/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-984449/.minikube}
	I0830 21:38:22.270156  990319 ubuntu.go:177] setting up certificates
	I0830 21:38:22.270164  990319 provision.go:83] configureAuth start
	I0830 21:38:22.270225  990319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-934429
	I0830 21:38:22.290606  990319 provision.go:138] copyHostCerts
	I0830 21:38:22.290684  990319 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem (1123 bytes)
	I0830 21:38:22.290800  990319 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem (1679 bytes)
	I0830 21:38:22.290865  990319 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem (1082 bytes)
	I0830 21:38:22.290914  990319 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem org=jenkins.addons-934429 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-934429]
	I0830 21:38:22.716264  990319 provision.go:172] copyRemoteCerts
	I0830 21:38:22.716331  990319 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 21:38:22.716373  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:22.733397  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:38:22.835696  990319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0830 21:38:22.863986  990319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0830 21:38:22.891943  990319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 21:38:22.919548  990319 provision.go:86] duration metric: configureAuth took 649.366729ms
	I0830 21:38:22.919582  990319 ubuntu.go:193] setting minikube options for container-runtime
	I0830 21:38:22.919770  990319 config.go:182] Loaded profile config "addons-934429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:38:22.919880  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:22.937040  990319 main.go:141] libmachine: Using SSH client type: native
	I0830 21:38:22.937686  990319 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34013 <nil> <nil>}
	I0830 21:38:22.937710  990319 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 21:38:23.196777  990319 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 21:38:23.196799  990319 machine.go:91] provisioned docker machine in 4.264036187s
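Note: the CRI-O drop-in written over SSH above can be spot-checked from the host once the machine is provisioned; a sketch using this run's profile name:

    minikube -p addons-934429 ssh -- cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    minikube -p addons-934429 ssh -- sudo systemctl is-active crio   # active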
	I0830 21:38:23.196808  990319 client.go:171] LocalClient.Create took 14.115162167s
	I0830 21:38:23.196823  990319 start.go:167] duration metric: libmachine.API.Create for "addons-934429" took 14.115219939s
	I0830 21:38:23.196834  990319 start.go:300] post-start starting for "addons-934429" (driver="docker")
	I0830 21:38:23.196844  990319 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 21:38:23.196918  990319 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 21:38:23.196971  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:23.215298  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:38:23.316139  990319 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 21:38:23.320273  990319 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0830 21:38:23.320320  990319 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0830 21:38:23.320351  990319 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0830 21:38:23.320365  990319 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0830 21:38:23.320375  990319 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/addons for local assets ...
	I0830 21:38:23.320456  990319 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/files for local assets ...
	I0830 21:38:23.320483  990319 start.go:303] post-start completed in 123.642511ms
	I0830 21:38:23.320798  990319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-934429
	I0830 21:38:23.339064  990319 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/config.json ...
	I0830 21:38:23.339373  990319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 21:38:23.339422  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:23.359224  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:38:23.455309  990319 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0830 21:38:23.460851  990319 start.go:128] duration metric: createHost completed in 14.382036125s
	I0830 21:38:23.460874  990319 start.go:83] releasing machines lock for "addons-934429", held for 14.382176671s
	I0830 21:38:23.460948  990319 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-934429
	I0830 21:38:23.478047  990319 ssh_runner.go:195] Run: cat /version.json
	I0830 21:38:23.478098  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:23.478099  990319 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 21:38:23.478164  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:23.496947  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:38:23.497467  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:38:23.589485  990319 ssh_runner.go:195] Run: systemctl --version
	I0830 21:38:23.729343  990319 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 21:38:23.876058  990319 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 21:38:23.881920  990319 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 21:38:23.906330  990319 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0830 21:38:23.906405  990319 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 21:38:23.945402  990319 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0830 21:38:23.945462  990319 start.go:466] detecting cgroup driver to use...
	I0830 21:38:23.945505  990319 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0830 21:38:23.945573  990319 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 21:38:23.964004  990319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 21:38:23.978023  990319 docker.go:196] disabling cri-docker service (if available) ...
	I0830 21:38:23.978118  990319 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 21:38:23.993695  990319 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 21:38:24.011214  990319 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 21:38:24.118721  990319 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 21:38:24.219239  990319 docker.go:212] disabling docker service ...
	I0830 21:38:24.219349  990319 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 21:38:24.242689  990319 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 21:38:24.257795  990319 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 21:38:24.366136  990319 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 21:38:24.476879  990319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 21:38:24.490390  990319 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 21:38:24.510639  990319 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 21:38:24.510706  990319 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:38:24.522901  990319 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 21:38:24.522966  990319 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:38:24.535646  990319 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:38:24.547649  990319 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:38:24.559086  990319 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 21:38:24.569667  990319 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 21:38:24.579628  990319 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 21:38:24.589836  990319 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 21:38:24.680013  990319 ssh_runner.go:195] Run: sudo systemctl restart crio
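Note: after the sed edits and restart above, the effective CRI-O settings can be confirmed inside the node; the expected values follow directly from the edits:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"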
	I0830 21:38:24.815226  990319 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 21:38:24.815357  990319 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 21:38:24.820088  990319 start.go:534] Will wait 60s for crictl version
	I0830 21:38:24.820152  990319 ssh_runner.go:195] Run: which crictl
	I0830 21:38:24.824749  990319 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 21:38:24.870495  990319 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0830 21:38:24.870634  990319 ssh_runner.go:195] Run: crio --version
	I0830 21:38:24.917685  990319 ssh_runner.go:195] Run: crio --version
	I0830 21:38:24.970382  990319 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0830 21:38:24.972629  990319 cli_runner.go:164] Run: docker network inspect addons-934429 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0830 21:38:24.990437  990319 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0830 21:38:24.995069  990319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 21:38:25.009813  990319 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:38:25.009885  990319 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:38:25.082972  990319 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 21:38:25.082996  990319 crio.go:415] Images already preloaded, skipping extraction
	I0830 21:38:25.083054  990319 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:38:25.125430  990319 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 21:38:25.125452  990319 cache_images.go:84] Images are preloaded, skipping loading
	I0830 21:38:25.125526  990319 ssh_runner.go:195] Run: crio config
	I0830 21:38:25.186281  990319 cni.go:84] Creating CNI manager for ""
	I0830 21:38:25.186340  990319 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0830 21:38:25.186376  990319 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 21:38:25.186398  990319 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-934429 NodeName:addons-934429 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 21:38:25.186555  990319 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-934429"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 21:38:25.186629  990319 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-934429 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-934429 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 21:38:25.186694  990319 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 21:38:25.197011  990319 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 21:38:25.197082  990319 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 21:38:25.206954  990319 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0830 21:38:25.227038  990319 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 21:38:25.247330  990319 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
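Note: once the rendered config is on the node, recent kubeadm releases can sanity-check it before init; a sketch ('kubeadm config validate' may not exist in older binaries):

    sudo /var/lib/minikube/binaries/v1.28.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new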
	I0830 21:38:25.267437  990319 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0830 21:38:25.271819  990319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 21:38:25.284780  990319 certs.go:56] Setting up /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429 for IP: 192.168.49.2
	I0830 21:38:25.284808  990319 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1c893f087ee62e9f919bfa6a6de84891ee8b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:38:25.284930  990319 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.key
	I0830 21:38:26.424442  990319 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt ...
	I0830 21:38:26.424473  990319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt: {Name:mka04d01775233b755e13c6ca2c7618be14dc5a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:38:26.425262  990319 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-984449/.minikube/ca.key ...
	I0830 21:38:26.425280  990319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/ca.key: {Name:mke8bc0bde89684699283c92f9c3840e8d6cbdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:38:26.425379  990319 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.key
	I0830 21:38:26.632889  990319 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.crt ...
	I0830 21:38:26.632917  990319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.crt: {Name:mkf5b156fa6e0b301da1807f4055803ad6b1b80a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:38:26.633092  990319 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.key ...
	I0830 21:38:26.633104  990319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.key: {Name:mk941f07276e5836dc99a47d2147b7367f65a84c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:38:26.633684  990319 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.key
	I0830 21:38:26.633705  990319 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt with IP's: []
	I0830 21:38:27.374005  990319 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt ...
	I0830 21:38:27.374037  990319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: {Name:mk141ec6f6b919e3c2467602d8e346f346db284b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:38:27.374231  990319 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.key ...
	I0830 21:38:27.374245  990319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.key: {Name:mkea86690b25906896c24464aa597ab062e0e170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:38:27.374743  990319 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/apiserver.key.dd3b5fb2
	I0830 21:38:27.374767  990319 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0830 21:38:27.590409  990319 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/apiserver.crt.dd3b5fb2 ...
	I0830 21:38:27.590445  990319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/apiserver.crt.dd3b5fb2: {Name:mk197bb0034a96a9e931e3720b133136c5216c57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:38:27.591119  990319 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/apiserver.key.dd3b5fb2 ...
	I0830 21:38:27.591136  990319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/apiserver.key.dd3b5fb2: {Name:mkb5cf5e1b7d258b3fe5d3dac692b14f16ee4992 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:38:27.591235  990319 certs.go:337] copying /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/apiserver.crt
	I0830 21:38:27.591310  990319 certs.go:341] copying /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/apiserver.key
	I0830 21:38:27.591366  990319 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/proxy-client.key
	I0830 21:38:27.591385  990319 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/proxy-client.crt with IP's: []
	I0830 21:38:27.814300  990319 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/proxy-client.crt ...
	I0830 21:38:27.814333  990319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/proxy-client.crt: {Name:mk9c8437f816886c60f32512a796100065676c54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:38:27.815025  990319 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/proxy-client.key ...
	I0830 21:38:27.815047  990319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/proxy-client.key: {Name:mk1506f6049ea17b4e968fb1e298f3b650c47582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:38:27.815675  990319 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem (1675 bytes)
	I0830 21:38:27.815721  990319 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem (1082 bytes)
	I0830 21:38:27.815750  990319 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem (1123 bytes)
	I0830 21:38:27.815788  990319 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem (1679 bytes)
	I0830 21:38:27.816462  990319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 21:38:27.847139  990319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0830 21:38:27.876113  990319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 21:38:27.904530  990319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 21:38:27.932605  990319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 21:38:27.961646  990319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 21:38:27.989538  990319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 21:38:28.019720  990319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0830 21:38:28.049437  990319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 21:38:28.078988  990319 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 21:38:28.100569  990319 ssh_runner.go:195] Run: openssl version
	I0830 21:38:28.107544  990319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 21:38:28.118997  990319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:38:28.123819  990319 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:38:28.123881  990319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:38:28.132263  990319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
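Note: the link name b5213941.0 used above is the OpenSSL subject hash of the minikubeCA certificate, which is how trust-store symlinks are conventionally named:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941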
	I0830 21:38:28.143626  990319 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 21:38:28.148034  990319 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 21:38:28.148098  990319 kubeadm.go:404] StartCluster: {Name:addons-934429 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-934429 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:38:28.148195  990319 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 21:38:28.148255  990319 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 21:38:28.191401  990319 cri.go:89] found id: ""
	I0830 21:38:28.191522  990319 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 21:38:28.202278  990319 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 21:38:28.213050  990319 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0830 21:38:28.213185  990319 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 21:38:28.223740  990319 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 21:38:28.223815  990319 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0830 21:38:28.325120  990319 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1043-aws\n", err: exit status 1
	I0830 21:38:28.407422  990319 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 21:38:44.295962  990319 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0830 21:38:44.296019  990319 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 21:38:44.296112  990319 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0830 21:38:44.296169  990319 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1043-aws
	I0830 21:38:44.296205  990319 kubeadm.go:322] OS: Linux
	I0830 21:38:44.296251  990319 kubeadm.go:322] CGROUPS_CPU: enabled
	I0830 21:38:44.296299  990319 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0830 21:38:44.296347  990319 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0830 21:38:44.296395  990319 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0830 21:38:44.296443  990319 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0830 21:38:44.296492  990319 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0830 21:38:44.296538  990319 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0830 21:38:44.296586  990319 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0830 21:38:44.296632  990319 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0830 21:38:44.296702  990319 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 21:38:44.296793  990319 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 21:38:44.296881  990319 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0830 21:38:44.296943  990319 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 21:38:44.298950  990319 out.go:204]   - Generating certificates and keys ...
	I0830 21:38:44.299049  990319 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 21:38:44.299117  990319 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 21:38:44.299184  990319 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0830 21:38:44.299243  990319 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0830 21:38:44.299310  990319 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0830 21:38:44.299360  990319 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0830 21:38:44.299414  990319 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0830 21:38:44.299526  990319 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-934429 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0830 21:38:44.299578  990319 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0830 21:38:44.299687  990319 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-934429 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0830 21:38:44.299751  990319 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0830 21:38:44.299813  990319 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0830 21:38:44.299858  990319 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0830 21:38:44.299914  990319 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 21:38:44.299964  990319 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 21:38:44.300014  990319 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 21:38:44.300077  990319 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 21:38:44.300136  990319 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 21:38:44.300216  990319 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 21:38:44.300284  990319 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 21:38:44.303486  990319 out.go:204]   - Booting up control plane ...
	I0830 21:38:44.303616  990319 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 21:38:44.303704  990319 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 21:38:44.303770  990319 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 21:38:44.303873  990319 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 21:38:44.303958  990319 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 21:38:44.303998  990319 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 21:38:44.304150  990319 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 21:38:44.304225  990319 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002096 seconds
	I0830 21:38:44.304331  990319 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 21:38:44.304455  990319 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 21:38:44.304512  990319 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 21:38:44.304692  990319 kubeadm.go:322] [mark-control-plane] Marking the node addons-934429 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 21:38:44.304752  990319 kubeadm.go:322] [bootstrap-token] Using token: avxucg.433xuhe656n6ip00
	I0830 21:38:44.306352  990319 out.go:204]   - Configuring RBAC rules ...
	I0830 21:38:44.306471  990319 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 21:38:44.306555  990319 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 21:38:44.306695  990319 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 21:38:44.306821  990319 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 21:38:44.306935  990319 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 21:38:44.307037  990319 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 21:38:44.307154  990319 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 21:38:44.307197  990319 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 21:38:44.307243  990319 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 21:38:44.307247  990319 kubeadm.go:322] 
	I0830 21:38:44.307317  990319 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 21:38:44.307322  990319 kubeadm.go:322] 
	I0830 21:38:44.307398  990319 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 21:38:44.307402  990319 kubeadm.go:322] 
	I0830 21:38:44.307428  990319 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 21:38:44.307487  990319 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 21:38:44.307537  990319 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 21:38:44.307541  990319 kubeadm.go:322] 
	I0830 21:38:44.307595  990319 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0830 21:38:44.307599  990319 kubeadm.go:322] 
	I0830 21:38:44.307652  990319 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 21:38:44.307657  990319 kubeadm.go:322] 
	I0830 21:38:44.307709  990319 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 21:38:44.307783  990319 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 21:38:44.307851  990319 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 21:38:44.307856  990319 kubeadm.go:322] 
	I0830 21:38:44.307940  990319 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 21:38:44.308017  990319 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 21:38:44.308021  990319 kubeadm.go:322] 
	I0830 21:38:44.308105  990319 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token avxucg.433xuhe656n6ip00 \
	I0830 21:38:44.308208  990319 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dbb2d1601005e0eb74ea76f1ea00d2a8cf049d471533cfdd7a067e3844af0231 \
	I0830 21:38:44.308228  990319 kubeadm.go:322] 	--control-plane 
	I0830 21:38:44.308233  990319 kubeadm.go:322] 
	I0830 21:38:44.308318  990319 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 21:38:44.308322  990319 kubeadm.go:322] 
	I0830 21:38:44.308405  990319 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token avxucg.433xuhe656n6ip00 \
	I0830 21:38:44.308515  990319 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dbb2d1601005e0eb74ea76f1ea00d2a8cf049d471533cfdd7a067e3844af0231 
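Note: the --discovery-token-ca-cert-hash printed above can be re-derived from the cluster CA using the standard kubeadm recipe (CA path taken from this run's certificatesDir):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'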
	I0830 21:38:44.308522  990319 cni.go:84] Creating CNI manager for ""
	I0830 21:38:44.308528  990319 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0830 21:38:44.310625  990319 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0830 21:38:44.312571  990319 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0830 21:38:44.327349  990319 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0830 21:38:44.327367  990319 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0830 21:38:44.384598  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
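Note: the manifest applied above deploys kindnet; its rollout can be watched on the node (a sketch; the DaemonSet name 'kindnet' in kube-system is assumed from the manifest):

    sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset/kindnet --timeout=60s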
	I0830 21:38:45.345449  990319 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 21:38:45.345584  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:45.345659  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7e60a4db8510b81002db541520f138fed781588 minikube.k8s.io/name=addons-934429 minikube.k8s.io/updated_at=2023_08_30T21_38_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:45.545365  990319 ops.go:34] apiserver oom_adj: -16
	I0830 21:38:45.545481  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:45.641355  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:46.237535  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:46.737651  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:47.237277  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:47.736967  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:48.237763  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:48.737254  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:49.237532  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:49.737653  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:50.237851  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:50.737906  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:51.237599  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:51.737638  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:52.237232  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:52.737368  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:53.237635  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:53.737733  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:54.237152  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:54.737457  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:55.237588  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:55.736915  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:56.237558  990319 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:38:56.340577  990319 kubeadm.go:1081] duration metric: took 10.995042076s to wait for elevateKubeSystemPrivileges.
	I0830 21:38:56.340600  990319 kubeadm.go:406] StartCluster complete in 28.192524121s
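Note: the repeated 'kubectl get sa default' runs above are a poll loop, waiting until the default ServiceAccount exists before elevating kube-system privileges; a minimal shell equivalent of that wait:

    until sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done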
	I0830 21:38:56.340615  990319 settings.go:142] acquiring lock: {Name:mkc3addaaa213f1dd8b8b58d94d3f946bbcb1099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:38:56.340746  990319 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 21:38:56.341184  990319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/kubeconfig: {Name:mk735c90eaee551cc7c6cf5c5ad3cfbf98dfe457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:38:56.341363  990319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 21:38:56.341655  990319 config.go:182] Loaded profile config "addons-934429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:38:56.341764  990319 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0830 21:38:56.341841  990319 addons.go:69] Setting volumesnapshots=true in profile "addons-934429"
	I0830 21:38:56.341859  990319 addons.go:231] Setting addon volumesnapshots=true in "addons-934429"
	I0830 21:38:56.341912  990319 host.go:66] Checking if "addons-934429" exists ...
	I0830 21:38:56.342375  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:38:56.343460  990319 addons.go:69] Setting cloud-spanner=true in profile "addons-934429"
	I0830 21:38:56.343488  990319 addons.go:231] Setting addon cloud-spanner=true in "addons-934429"
	I0830 21:38:56.343523  990319 host.go:66] Checking if "addons-934429" exists ...
	I0830 21:38:56.343854  990319 addons.go:69] Setting inspektor-gadget=true in profile "addons-934429"
	I0830 21:38:56.343887  990319 addons.go:231] Setting addon inspektor-gadget=true in "addons-934429"
	I0830 21:38:56.343935  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:38:56.343966  990319 host.go:66] Checking if "addons-934429" exists ...
	I0830 21:38:56.344427  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:38:56.344555  990319 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-934429"
	I0830 21:38:56.344603  990319 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-934429"
	I0830 21:38:56.344661  990319 host.go:66] Checking if "addons-934429" exists ...
	I0830 21:38:56.345115  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:38:56.346580  990319 addons.go:69] Setting default-storageclass=true in profile "addons-934429"
	I0830 21:38:56.346619  990319 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-934429"
	I0830 21:38:56.346961  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:38:56.347069  990319 addons.go:69] Setting gcp-auth=true in profile "addons-934429"
	I0830 21:38:56.347115  990319 mustload.go:65] Loading cluster: addons-934429
	I0830 21:38:56.347310  990319 config.go:182] Loaded profile config "addons-934429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:38:56.347602  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:38:56.347705  990319 addons.go:69] Setting ingress=true in profile "addons-934429"
	I0830 21:38:56.347743  990319 addons.go:231] Setting addon ingress=true in "addons-934429"
	I0830 21:38:56.347812  990319 host.go:66] Checking if "addons-934429" exists ...
	I0830 21:38:56.348245  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:38:56.348355  990319 addons.go:69] Setting ingress-dns=true in profile "addons-934429"
	I0830 21:38:56.348384  990319 addons.go:231] Setting addon ingress-dns=true in "addons-934429"
	I0830 21:38:56.348442  990319 host.go:66] Checking if "addons-934429" exists ...
	I0830 21:38:56.348877  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:38:56.348992  990319 addons.go:69] Setting storage-provisioner=true in profile "addons-934429"
	I0830 21:38:56.349024  990319 addons.go:231] Setting addon storage-provisioner=true in "addons-934429"
	I0830 21:38:56.349065  990319 host.go:66] Checking if "addons-934429" exists ...
	I0830 21:38:56.353298  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:38:56.353475  990319 addons.go:69] Setting metrics-server=true in profile "addons-934429"
	I0830 21:38:56.353511  990319 addons.go:231] Setting addon metrics-server=true in "addons-934429"
	I0830 21:38:56.353565  990319 host.go:66] Checking if "addons-934429" exists ...
	I0830 21:38:56.354041  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:38:56.365204  990319 addons.go:69] Setting registry=true in profile "addons-934429"
	I0830 21:38:56.365235  990319 addons.go:231] Setting addon registry=true in "addons-934429"
	I0830 21:38:56.365284  990319 host.go:66] Checking if "addons-934429" exists ...
	I0830 21:38:56.365715  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:38:56.468925  990319 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0830 21:38:56.477015  990319 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0830 21:38:56.513231  990319 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0830 21:38:56.513308  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0830 21:38:56.513414  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:56.499509  990319 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0830 21:38:56.513570  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0830 21:38:56.513632  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:56.524890  990319 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.19.0
	I0830 21:38:56.535429  990319 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0830 21:38:56.535451  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0830 21:38:56.535515  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:56.542426  990319 out.go:177]   - Using image docker.io/registry:2.8.1
	I0830 21:38:56.550006  990319 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0830 21:38:56.553288  990319 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0830 21:38:56.553311  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0830 21:38:56.553381  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:56.596784  990319 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0830 21:38:56.595436  990319 addons.go:231] Setting addon default-storageclass=true in "addons-934429"
	I0830 21:38:56.598647  990319 host.go:66] Checking if "addons-934429" exists ...
	I0830 21:38:56.599123  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:38:56.624037  990319 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0830 21:38:56.625964  990319 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0830 21:38:56.629940  990319 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0830 21:38:56.633177  990319 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0830 21:38:56.635080  990319 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0830 21:38:56.630079  990319 host.go:66] Checking if "addons-934429" exists ...
	I0830 21:38:56.640827  990319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
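Note: the pipeline above injects a hosts block into the CoreDNS Corefile so host.minikube.internal resolves in-cluster; the result can be inspected from the host:

    kubectl --context addons-934429 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # expected to contain:
    #   hosts {
    #      192.168.49.1 host.minikube.internal
    #      fallthrough
    #   }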
	I0830 21:38:56.645299  990319 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0830 21:38:56.646944  990319 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0830 21:38:56.648917  990319 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0830 21:38:56.648943  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0830 21:38:56.649019  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:56.680060  990319 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0830 21:38:56.688748  990319 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0830 21:38:56.688773  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0830 21:38:56.688843  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:56.698596  990319 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0830 21:38:56.700834  990319 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0830 21:38:56.703251  990319 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0830 21:38:56.708864  990319 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0830 21:38:56.708886  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0830 21:38:56.708946  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:56.715970  990319 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0830 21:38:56.718200  990319 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 21:38:56.718221  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 21:38:56.718285  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:56.728763  990319 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:38:56.730898  990319 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 21:38:56.730923  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 21:38:56.730992  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
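Each manifest in the staging lines above is written over SSH from memory ("scp memory --> ..."), and every transfer first resolves the container's published SSH port with the docker template the log keeps repeating. Run by hand, that lookup is just (a sketch of the same command, container name taken from this run):

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  addons-934429
	# Prints the host port docker mapped to the container's 22/tcp, here
	# 34013, which is the Port the sshutil lines below connect to.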
	I0830 21:38:56.780675  990319 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-934429" context rescaled to 1 replicas
	I0830 21:38:56.780711  990319 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 21:38:56.782997  990319 out.go:177] * Verifying Kubernetes components...
	I0830 21:38:56.785193  990319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
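start.go now waits up to 6m0s for the node to report Ready; the node_ready.go probes further down are that wait. The equivalent one-off check against this cluster would be (a sketch, not part of the harness):

	kubectl --context addons-934429 wait node/addons-934429 \
	  --for=condition=Ready --timeout=6m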
	I0830 21:38:56.791038  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:38:56.818247  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:38:56.826463  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:38:56.827183  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:38:56.836165  990319 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 21:38:56.836185  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 21:38:56.836244  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:38:56.889282  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:38:56.921113  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:38:56.928132  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:38:56.929114  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:38:56.929912  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:38:56.945142  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:38:57.159950  990319 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0830 21:38:57.159972  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0830 21:38:57.196447  990319 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0830 21:38:57.196503  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0830 21:38:57.270689  990319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0830 21:38:57.287168  990319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0830 21:38:57.299209  990319 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0830 21:38:57.299231  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0830 21:38:57.321592  990319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 21:38:57.325284  990319 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0830 21:38:57.325308  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0830 21:38:57.340064  990319 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0830 21:38:57.340086  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0830 21:38:57.348831  990319 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0830 21:38:57.348853  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0830 21:38:57.364542  990319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0830 21:38:57.372297  990319 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 21:38:57.372363  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0830 21:38:57.407738  990319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 21:38:57.443972  990319 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0830 21:38:57.444038  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0830 21:38:57.480805  990319 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 21:38:57.480873  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 21:38:57.483487  990319 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0830 21:38:57.483551  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0830 21:38:57.501109  990319 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0830 21:38:57.501188  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0830 21:38:57.526585  990319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0830 21:38:57.639319  990319 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0830 21:38:57.639394  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0830 21:38:57.643474  990319 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 21:38:57.643540  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 21:38:57.649341  990319 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0830 21:38:57.649404  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0830 21:38:57.652992  990319 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0830 21:38:57.653067  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0830 21:38:57.766844  990319 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0830 21:38:57.766901  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0830 21:38:57.787763  990319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 21:38:57.804045  990319 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0830 21:38:57.804112  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0830 21:38:57.841922  990319 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0830 21:38:57.841987  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0830 21:38:57.877738  990319 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0830 21:38:57.877805  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0830 21:38:57.903171  990319 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0830 21:38:57.903235  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0830 21:38:57.957648  990319 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0830 21:38:57.957726  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0830 21:38:58.012675  990319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0830 21:38:58.139538  990319 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0830 21:38:58.139605  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0830 21:38:58.149236  990319 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0830 21:38:58.149293  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0830 21:38:58.244360  990319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0830 21:38:58.260445  990319 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0830 21:38:58.260512  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0830 21:38:58.371140  990319 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0830 21:38:58.371165  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0830 21:38:58.521461  990319 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0830 21:38:58.521485  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0830 21:38:58.626766  990319 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0830 21:38:58.626792  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0830 21:38:58.736408  990319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0830 21:38:59.082558  990319 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.440435054s)
	I0830 21:38:59.082587  990319 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0830 21:38:59.082612  990319 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.297401261s)
	I0830 21:38:59.083450  990319 node_ready.go:35] waiting up to 6m0s for node "addons-934429" to be "Ready" ...
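The 2.4s command that just completed rewrites the coredns ConfigMap in place: sed splices a hosts stanza in front of the forward directive and a log directive in front of errors, then kubectl replace pushes the result back. The Corefile ends up looking roughly like this (illustrative; only the inserted lines and the 192.168.49.1 mapping are taken from the log, the surrounding directives are assumed to be the stock CoreDNS ones):

	.:53 {
	    log                 # inserted before errors
	    errors
	    # ... kubernetes, health, ready plugins elided ...
	    hosts {             # inserted before forward
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    cache 30
	}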
	I0830 21:39:00.717699  990319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.446975823s)
	I0830 21:39:00.858003  990319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.570801331s)
	I0830 21:39:00.858121  990319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.536503651s)
	I0830 21:39:01.256215  990319 node_ready.go:58] node "addons-934429" has status "Ready":"False"
	I0830 21:39:01.915011  990319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.550398917s)
	I0830 21:39:01.915045  990319 addons.go:467] Verifying addon ingress=true in "addons-934429"
	I0830 21:39:01.917446  990319 out.go:177] * Verifying ingress addon...
	I0830 21:39:01.915254  990319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.507448231s)
	I0830 21:39:01.915288  990319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.388628877s)
	I0830 21:39:01.915341  990319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.127511531s)
	I0830 21:39:01.915459  990319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.902707738s)
	I0830 21:39:01.915555  990319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.671128308s)
	I0830 21:39:01.920074  990319 addons.go:467] Verifying addon registry=true in "addons-934429"
	I0830 21:39:01.922133  990319 out.go:177] * Verifying registry addon...
	I0830 21:39:01.920816  990319 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0830 21:39:01.920838  990319 addons.go:467] Verifying addon metrics-server=true in "addons-934429"
	W0830 21:39:01.920860  990319 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0830 21:39:01.924750  990319 retry.go:31] will retry after 296.071042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
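The failure logged above is an ordering race rather than a bad manifest: the VolumeSnapshotClass object and the CRD that defines it are sent in one kubectl apply, and the API server rejects the CR because the freshly created CRD is not established yet. minikube's answer is the retry (re-issued at 21:39:02 with apply --force); the general-purpose fix outside this code path is to gate the CR on CRD establishment, e.g. (a sketch, not minikube's code):

	# Create the CRDs first and wait until the API server accepts the new kind.
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	# Only now can VolumeSnapshotClass objects be applied.
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml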
	I0830 21:39:01.925589  990319 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0830 21:39:01.934275  990319 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0830 21:39:01.934346  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:01.935109  990319 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0830 21:39:01.935128  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:01.946310  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:01.947094  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
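The kapi.go:96 lines that dominate the rest of this log are minikube's readiness poll, one probe per interval per label selector, staying in Pending until the addon pods come up. The same wait expressed with plain kubectl would be (a sketch using the selectors and namespaces from the log):

	kubectl --context addons-934429 -n ingress-nginx wait pod \
	  --selector=app.kubernetes.io/name=ingress-nginx \
	  --for=condition=Ready --timeout=6m
	kubectl --context addons-934429 -n kube-system wait pod \
	  --selector=kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=6m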
	I0830 21:39:02.221319  990319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0830 21:39:02.246796  990319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.510319934s)
	I0830 21:39:02.246877  990319 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-934429"
	I0830 21:39:02.249785  990319 out.go:177] * Verifying csi-hostpath-driver addon...
	I0830 21:39:02.253340  990319 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0830 21:39:02.287109  990319 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0830 21:39:02.287180  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:02.298982  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:02.466863  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:02.467311  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:02.803875  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:02.955017  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:02.958223  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:03.304596  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:03.479468  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:03.480655  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:03.695715  990319 node_ready.go:58] node "addons-934429" has status "Ready":"False"
	I0830 21:39:03.741779  990319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.520366546s)
	I0830 21:39:03.804203  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:03.955154  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:03.955760  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:04.309011  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:04.453914  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:04.454301  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:04.746295  990319 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0830 21:39:04.746437  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:39:04.783937  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:39:04.803969  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:04.953249  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:04.953649  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:04.955345  990319 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0830 21:39:05.019373  990319 addons.go:231] Setting addon gcp-auth=true in "addons-934429"
	I0830 21:39:05.019483  990319 host.go:66] Checking if "addons-934429" exists ...
	I0830 21:39:05.020056  990319 cli_runner.go:164] Run: docker container inspect addons-934429 --format={{.State.Status}}
	I0830 21:39:05.064086  990319 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0830 21:39:05.064141  990319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-934429
	I0830 21:39:05.088322  990319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34013 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/addons-934429/id_rsa Username:docker}
	I0830 21:39:05.203373  990319 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0830 21:39:05.205649  990319 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0830 21:39:05.208105  990319 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0830 21:39:05.208128  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0830 21:39:05.238218  990319 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0830 21:39:05.238342  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0830 21:39:05.271234  990319 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0830 21:39:05.271293  990319 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0830 21:39:05.298998  990319 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
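The entries from 21:39:04.746 through 21:39:05.298 wire up the gcp-auth addon: the application-default credentials (162 bytes) and the project name (12 bytes) are copied into the node, then the namespace, service, and webhook manifests are applied. The user-facing path that triggers all of this is simply (a sketch; the credentials come from gcloud, and the addon picks them up from the host):

	gcloud auth application-default login
	minikube -p addons-934429 addons enable gcp-auth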
	I0830 21:39:05.305803  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:05.451142  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:05.459143  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:05.803941  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:05.954022  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:05.955226  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:06.186174  990319 node_ready.go:58] node "addons-934429" has status "Ready":"False"
	I0830 21:39:06.315609  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:06.477438  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:06.478268  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:06.624714  990319 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.325618524s)
	I0830 21:39:06.626204  990319 addons.go:467] Verifying addon gcp-auth=true in "addons-934429"
	I0830 21:39:06.629037  990319 out.go:177] * Verifying gcp-auth addon...
	I0830 21:39:06.633483  990319 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0830 21:39:06.663714  990319 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0830 21:39:06.663778  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:06.671717  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:06.804826  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:06.954814  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:06.955626  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:07.175710  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:07.303849  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:07.453873  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:07.454213  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:07.676383  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:07.804309  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:07.952299  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:07.955441  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:08.178050  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:08.204813  990319 node_ready.go:58] node "addons-934429" has status "Ready":"False"
	I0830 21:39:08.305191  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:08.453165  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:08.454922  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:08.676055  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:08.803880  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:08.953372  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:08.954457  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:09.175930  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:09.304097  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:09.454021  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:09.454993  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:09.675911  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:09.805063  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:09.952112  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:09.956021  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:10.177110  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:10.303847  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:10.453289  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:10.454946  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:10.676184  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:10.686512  990319 node_ready.go:58] node "addons-934429" has status "Ready":"False"
	I0830 21:39:10.804787  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:10.954003  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:10.955074  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:11.177700  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:11.322823  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:11.454284  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:11.465251  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:11.676836  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:11.804572  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:11.953354  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:11.963731  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:12.175596  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:12.303984  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:12.455875  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:12.457237  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:12.678379  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:12.687535  990319 node_ready.go:58] node "addons-934429" has status "Ready":"False"
	I0830 21:39:12.804174  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:12.952847  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:12.953843  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:13.176762  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:13.304939  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:13.453146  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:13.454270  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:13.675804  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:13.804193  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:13.952383  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:13.953961  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:14.175889  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:14.303986  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:14.453899  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:14.455284  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:14.675630  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:14.805934  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:14.951931  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:14.953980  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:15.175965  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:15.185272  990319 node_ready.go:58] node "addons-934429" has status "Ready":"False"
	I0830 21:39:15.303770  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:15.451316  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:15.452144  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:15.675553  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:15.803696  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:15.950181  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:15.951887  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:16.175661  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:16.304222  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:16.451307  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:16.451872  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:16.676051  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:16.803804  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:16.950247  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:16.951620  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:17.175995  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:17.185794  990319 node_ready.go:58] node "addons-934429" has status "Ready":"False"
	I0830 21:39:17.303541  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:17.451955  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:17.452352  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:17.676096  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:17.803315  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:17.951287  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:17.951505  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:18.176390  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:18.303966  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:18.451891  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:18.452627  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:18.675912  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:18.803439  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:18.950538  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:18.951984  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:19.185327  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:19.193811  990319 node_ready.go:58] node "addons-934429" has status "Ready":"False"
	I0830 21:39:19.304319  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:19.452738  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:19.454216  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:19.675851  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:19.803905  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:19.951526  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:19.952215  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:20.176276  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:20.303451  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:20.450874  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:20.451807  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:20.675795  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:20.804212  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:20.950222  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:20.951457  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:21.176190  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:21.303860  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:21.451186  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:21.452004  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:21.675492  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:21.686216  990319 node_ready.go:58] node "addons-934429" has status "Ready":"False"
	I0830 21:39:21.803816  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:21.951467  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:21.952136  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:22.176038  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:22.303623  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:22.451320  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:22.451661  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:22.675961  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:22.803340  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:22.951473  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:22.952436  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:23.175549  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:23.304340  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:23.451663  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:23.452781  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:23.676164  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:23.804330  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:23.951586  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:23.951779  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:24.175236  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:24.186156  990319 node_ready.go:58] node "addons-934429" has status "Ready":"False"
	I0830 21:39:24.303617  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:24.453525  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:24.453809  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:24.675694  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:24.803510  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:24.951317  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:24.952798  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:25.176489  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:25.303587  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:25.451018  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:25.452255  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:25.676049  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:25.803368  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:25.950668  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:25.951576  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:26.175492  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:26.307810  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:26.451014  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:26.452057  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:26.676140  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:26.685833  990319 node_ready.go:58] node "addons-934429" has status "Ready":"False"
	I0830 21:39:26.804799  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:26.951303  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:26.952110  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:27.176077  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:27.303615  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:27.451547  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:27.452315  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:27.676197  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:27.804093  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:27.951720  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:27.952057  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:28.175724  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:28.303275  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:28.451266  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:28.452011  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:28.675400  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:28.685927  990319 node_ready.go:58] node "addons-934429" has status "Ready":"False"
	I0830 21:39:28.803678  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:28.951293  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:28.951883  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:29.175998  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:29.304186  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:29.451345  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:29.451943  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:29.676050  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:29.804484  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:29.952183  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:29.952785  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:30.175680  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:30.303537  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:30.451888  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:30.453082  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:30.675391  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:30.686192  990319 node_ready.go:58] node "addons-934429" has status "Ready":"False"
	I0830 21:39:30.803501  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:30.971124  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:30.971924  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:31.178724  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:31.188251  990319 node_ready.go:49] node "addons-934429" has status "Ready":"True"
	I0830 21:39:31.188318  990319 node_ready.go:38] duration metric: took 32.104840036s waiting for node "addons-934429" to be "Ready" ...
	I0830 21:39:31.188340  990319 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
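
The node_ready poll above reduces, in essence, to re-fetching the node and checking its Ready condition until it flips to True. A minimal client-go sketch follows; pollNodeReady is a hypothetical helper name, the kubeconfig path and 2-second cadence are assumptions for illustration, and minikube's actual node_ready.go differs in detail.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// pollNodeReady blocks until the named node reports Ready=True or the timeout expires.
func pollNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // assumed cadence; the log above ticks at roughly this rate
	}
	return fmt.Errorf("node %q never became Ready within %v", name, timeout)
}

func main() {
	// Assumes the default kubeconfig location; minikube wires its own config instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := pollNodeReady(cs, "addons-934429", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
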
	I0830 21:39:31.199929  990319 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rnjvd" in "kube-system" namespace to be "Ready" ...
	I0830 21:39:31.338952  990319 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0830 21:39:31.339007  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:31.453312  990319 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0830 21:39:31.453337  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:31.455404  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:31.684071  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:31.819088  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:31.989681  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:31.991139  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:32.199402  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:32.305772  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:32.453534  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:32.455467  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:32.676298  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:32.807445  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:32.955010  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:32.956343  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:33.176177  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:33.306364  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:33.376657  990319 pod_ready.go:92] pod "coredns-5dd5756b68-rnjvd" in "kube-system" namespace has status "Ready":"True"
	I0830 21:39:33.376724  990319 pod_ready.go:81] duration metric: took 2.176719317s waiting for pod "coredns-5dd5756b68-rnjvd" in "kube-system" namespace to be "Ready" ...
	I0830 21:39:33.376763  990319 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-934429" in "kube-system" namespace to be "Ready" ...
	I0830 21:39:33.392522  990319 pod_ready.go:92] pod "etcd-addons-934429" in "kube-system" namespace has status "Ready":"True"
	I0830 21:39:33.392547  990319 pod_ready.go:81] duration metric: took 15.764018ms waiting for pod "etcd-addons-934429" in "kube-system" namespace to be "Ready" ...
	I0830 21:39:33.392563  990319 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-934429" in "kube-system" namespace to be "Ready" ...
	I0830 21:39:33.410003  990319 pod_ready.go:92] pod "kube-apiserver-addons-934429" in "kube-system" namespace has status "Ready":"True"
	I0830 21:39:33.410024  990319 pod_ready.go:81] duration metric: took 17.422042ms waiting for pod "kube-apiserver-addons-934429" in "kube-system" namespace to be "Ready" ...
	I0830 21:39:33.410035  990319 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-934429" in "kube-system" namespace to be "Ready" ...
	I0830 21:39:33.419482  990319 pod_ready.go:92] pod "kube-controller-manager-addons-934429" in "kube-system" namespace has status "Ready":"True"
	I0830 21:39:33.419549  990319 pod_ready.go:81] duration metric: took 9.505583ms waiting for pod "kube-controller-manager-addons-934429" in "kube-system" namespace to be "Ready" ...
	I0830 21:39:33.419579  990319 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w6q6w" in "kube-system" namespace to be "Ready" ...
	I0830 21:39:33.455527  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:33.456844  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:33.586685  990319 pod_ready.go:92] pod "kube-proxy-w6q6w" in "kube-system" namespace has status "Ready":"True"
	I0830 21:39:33.586743  990319 pod_ready.go:81] duration metric: took 167.144923ms waiting for pod "kube-proxy-w6q6w" in "kube-system" namespace to be "Ready" ...
	I0830 21:39:33.586778  990319 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-934429" in "kube-system" namespace to be "Ready" ...
	I0830 21:39:33.681785  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:33.825856  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:33.955412  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:33.956673  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:33.987427  990319 pod_ready.go:92] pod "kube-scheduler-addons-934429" in "kube-system" namespace has status "Ready":"True"
	I0830 21:39:33.987458  990319 pod_ready.go:81] duration metric: took 400.657809ms waiting for pod "kube-scheduler-addons-934429" in "kube-system" namespace to be "Ready" ...
	I0830 21:39:33.987472  990319 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-wjzsv" in "kube-system" namespace to be "Ready" ...
	I0830 21:39:34.175836  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:34.304713  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:34.451774  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:34.452591  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:34.676344  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:34.804856  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:34.951728  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:34.952604  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:35.176480  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:35.304586  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:35.453428  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:35.456281  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:35.676574  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:35.804868  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:35.952672  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:35.954124  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:36.176279  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:36.293469  990319 pod_ready.go:102] pod "metrics-server-7c66d45ddc-wjzsv" in "kube-system" namespace has status "Ready":"False"
	I0830 21:39:36.304262  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:36.452559  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:36.453775  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:36.675747  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:36.804447  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:36.953809  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:36.955956  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:37.183390  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:37.307895  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:37.458184  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:37.459158  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:37.676005  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:37.805482  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:37.954173  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:37.955066  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:38.176432  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:38.294895  990319 pod_ready.go:102] pod "metrics-server-7c66d45ddc-wjzsv" in "kube-system" namespace has status "Ready":"False"
	I0830 21:39:38.306721  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:38.455062  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:38.456349  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:38.676117  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:38.825497  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:38.954640  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:38.956420  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:39.184151  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:39.323091  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:39.467139  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:39.468869  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:39.691254  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:39.805261  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:39.954205  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:39.956593  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:40.179946  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:40.305390  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:40.454523  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:40.455319  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:40.676489  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:40.795568  990319 pod_ready.go:102] pod "metrics-server-7c66d45ddc-wjzsv" in "kube-system" namespace has status "Ready":"False"
	I0830 21:39:40.810691  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:40.955030  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:40.959087  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:41.176397  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:41.309537  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:41.456257  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:41.457388  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:41.687194  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:41.821646  990319 pod_ready.go:92] pod "metrics-server-7c66d45ddc-wjzsv" in "kube-system" namespace has status "Ready":"True"
	I0830 21:39:41.821677  990319 pod_ready.go:81] duration metric: took 7.834197s waiting for pod "metrics-server-7c66d45ddc-wjzsv" in "kube-system" namespace to be "Ready" ...
	I0830 21:39:41.821808  990319 pod_ready.go:38] duration metric: took 10.633445464s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:39:41.821828  990319 api_server.go:52] waiting for apiserver process to appear ...
	I0830 21:39:41.821884  990319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:39:41.824343  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:41.849836  990319 api_server.go:72] duration metric: took 45.069094597s to wait for apiserver process to appear ...
	I0830 21:39:41.849861  990319 api_server.go:88] waiting for apiserver healthz status ...
	I0830 21:39:41.849883  990319 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0830 21:39:41.860273  990319 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0830 21:39:41.864646  990319 api_server.go:141] control plane version: v1.28.1
	I0830 21:39:41.864744  990319 api_server.go:131] duration metric: took 14.867987ms to wait for apiserver health ...
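
The healthz probe logged just above is an HTTP GET against the apiserver's /healthz endpoint that expects a 200 with body "ok". A stripped-down sketch is below; minikube authenticates with the cluster's client certificates, whereas this version skips TLS verification, which is an assumption made purely for illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: the real check presents the cluster's client certs.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("returned %d: %s\n", resp.StatusCode, body) // a healthy apiserver prints "200: ok"
}
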
	I0830 21:39:41.864767  990319 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 21:39:41.883991  990319 system_pods.go:59] 17 kube-system pods found
	I0830 21:39:41.885236  990319 system_pods.go:61] "coredns-5dd5756b68-rnjvd" [3997196f-5d30-40e3-9657-d2a63d34342e] Running
	I0830 21:39:41.885306  990319 system_pods.go:61] "csi-hostpath-attacher-0" [6ad0c02b-cbf2-4dfd-b91f-1405aa1b0ede] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0830 21:39:41.885347  990319 system_pods.go:61] "csi-hostpath-resizer-0" [7bb34879-0a83-49ca-acdb-4864242bb9a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0830 21:39:41.885384  990319 system_pods.go:61] "csi-hostpathplugin-fct6w" [83d5ea84-1211-463f-becc-ec0343970ac5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0830 21:39:41.885413  990319 system_pods.go:61] "etcd-addons-934429" [0b75b078-d8e9-4333-a1e3-ea538a7db196] Running
	I0830 21:39:41.885443  990319 system_pods.go:61] "kindnet-2pbbt" [afb5a3d8-7d25-489e-9a80-51e9d73e26ac] Running
	I0830 21:39:41.885471  990319 system_pods.go:61] "kube-apiserver-addons-934429" [d849187a-cffe-4a14-93d8-094f504bfa0a] Running
	I0830 21:39:41.885489  990319 system_pods.go:61] "kube-controller-manager-addons-934429" [95d7fddc-09a4-4d86-a4ca-5271edf663bb] Running
	I0830 21:39:41.885507  990319 system_pods.go:61] "kube-ingress-dns-minikube" [77abf737-6a2d-4313-a01a-8bebbde6bc58] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0830 21:39:41.885526  990319 system_pods.go:61] "kube-proxy-w6q6w" [6a87099b-0fdd-4fe9-928c-a7e7bbe4c3b8] Running
	I0830 21:39:41.885545  990319 system_pods.go:61] "kube-scheduler-addons-934429" [b00b6ca7-6ab4-40bf-bbb1-fdb351fa5041] Running
	I0830 21:39:41.885562  990319 system_pods.go:61] "metrics-server-7c66d45ddc-wjzsv" [92424fd5-04a2-4f7a-a4b6-c9fb99352034] Running
	I0830 21:39:41.885581  990319 system_pods.go:61] "registry-74x2w" [5c1a0a58-4b38-4d8c-b5da-9aa551fc0068] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0830 21:39:41.885601  990319 system_pods.go:61] "registry-proxy-4jnsf" [b2e2e34a-4172-482d-998a-c32144619319] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0830 21:39:41.885621  990319 system_pods.go:61] "snapshot-controller-58dbcc7b99-5gbnj" [013a6b4a-b439-4fad-82d3-5e072963ad01] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0830 21:39:41.885649  990319 system_pods.go:61] "snapshot-controller-58dbcc7b99-m28fv" [31cef0fd-a56c-48bb-b122-2e488a52c5e6] Running
	I0830 21:39:41.885667  990319 system_pods.go:61] "storage-provisioner" [0d71c659-65b2-4771-997c-22ec04e9506a] Running
	I0830 21:39:41.885685  990319 system_pods.go:74] duration metric: took 20.8916ms to wait for pod list to return data ...
	I0830 21:39:41.885704  990319 default_sa.go:34] waiting for default service account to be created ...
	I0830 21:39:41.888276  990319 default_sa.go:45] found service account: "default"
	I0830 21:39:41.888295  990319 default_sa.go:55] duration metric: took 2.559905ms for default service account to be created ...
	I0830 21:39:41.888305  990319 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 21:39:41.900606  990319 system_pods.go:86] 17 kube-system pods found
	I0830 21:39:41.900644  990319 system_pods.go:89] "coredns-5dd5756b68-rnjvd" [3997196f-5d30-40e3-9657-d2a63d34342e] Running
	I0830 21:39:41.900657  990319 system_pods.go:89] "csi-hostpath-attacher-0" [6ad0c02b-cbf2-4dfd-b91f-1405aa1b0ede] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0830 21:39:41.900706  990319 system_pods.go:89] "csi-hostpath-resizer-0" [7bb34879-0a83-49ca-acdb-4864242bb9a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0830 21:39:41.900725  990319 system_pods.go:89] "csi-hostpathplugin-fct6w" [83d5ea84-1211-463f-becc-ec0343970ac5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0830 21:39:41.900731  990319 system_pods.go:89] "etcd-addons-934429" [0b75b078-d8e9-4333-a1e3-ea538a7db196] Running
	I0830 21:39:41.900737  990319 system_pods.go:89] "kindnet-2pbbt" [afb5a3d8-7d25-489e-9a80-51e9d73e26ac] Running
	I0830 21:39:41.900746  990319 system_pods.go:89] "kube-apiserver-addons-934429" [d849187a-cffe-4a14-93d8-094f504bfa0a] Running
	I0830 21:39:41.900754  990319 system_pods.go:89] "kube-controller-manager-addons-934429" [95d7fddc-09a4-4d86-a4ca-5271edf663bb] Running
	I0830 21:39:41.900762  990319 system_pods.go:89] "kube-ingress-dns-minikube" [77abf737-6a2d-4313-a01a-8bebbde6bc58] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0830 21:39:41.900787  990319 system_pods.go:89] "kube-proxy-w6q6w" [6a87099b-0fdd-4fe9-928c-a7e7bbe4c3b8] Running
	I0830 21:39:41.900793  990319 system_pods.go:89] "kube-scheduler-addons-934429" [b00b6ca7-6ab4-40bf-bbb1-fdb351fa5041] Running
	I0830 21:39:41.900812  990319 system_pods.go:89] "metrics-server-7c66d45ddc-wjzsv" [92424fd5-04a2-4f7a-a4b6-c9fb99352034] Running
	I0830 21:39:41.900828  990319 system_pods.go:89] "registry-74x2w" [5c1a0a58-4b38-4d8c-b5da-9aa551fc0068] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0830 21:39:41.900835  990319 system_pods.go:89] "registry-proxy-4jnsf" [b2e2e34a-4172-482d-998a-c32144619319] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0830 21:39:41.900846  990319 system_pods.go:89] "snapshot-controller-58dbcc7b99-5gbnj" [013a6b4a-b439-4fad-82d3-5e072963ad01] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0830 21:39:41.900852  990319 system_pods.go:89] "snapshot-controller-58dbcc7b99-m28fv" [31cef0fd-a56c-48bb-b122-2e488a52c5e6] Running
	I0830 21:39:41.900861  990319 system_pods.go:89] "storage-provisioner" [0d71c659-65b2-4771-997c-22ec04e9506a] Running
	I0830 21:39:41.900869  990319 system_pods.go:126] duration metric: took 12.558806ms to wait for k8s-apps to be running ...
	I0830 21:39:41.900892  990319 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 21:39:41.900965  990319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:39:41.939592  990319 system_svc.go:56] duration metric: took 38.668711ms WaitForService to wait for kubelet.
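
The kubelet check above shells the command out over SSH into the node; run locally, the same probe reduces to the sketch below. systemctl's --quiet flag suppresses output, so the exit status alone is the answer: 0 means active, non-zero means not.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the exact command in the log: sudo systemctl is-active --quiet service kubelet
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
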
	I0830 21:39:41.939664  990319 kubeadm.go:581] duration metric: took 45.158928112s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 21:39:41.939697  990319 node_conditions.go:102] verifying NodePressure condition ...
	I0830 21:39:41.944782  990319 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0830 21:39:41.944855  990319 node_conditions.go:123] node cpu capacity is 2
	I0830 21:39:41.944881  990319 node_conditions.go:105] duration metric: took 5.167089ms to run NodePressure ...
	I0830 21:39:41.944904  990319 start.go:228] waiting for startup goroutines ...
	I0830 21:39:41.954833  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:41.954966  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:42.176723  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:42.306275  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:42.454867  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:42.456037  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:42.676054  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:42.805868  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:42.953521  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:42.970844  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:43.180870  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:43.307241  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:43.462483  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:43.463659  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:43.675922  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:43.805330  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:43.955349  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:43.956304  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:44.176488  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:44.311234  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:44.461872  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:44.466590  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:44.676977  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:44.805225  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:44.952037  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:44.952950  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:45.181230  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:45.307288  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:45.452641  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:45.453953  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:45.675878  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:45.805157  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:45.951825  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:45.954789  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:46.175574  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:46.305253  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:46.456433  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:46.457361  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:46.676503  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:46.808332  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:46.953102  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:46.955339  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:47.191023  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:47.304817  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:47.451878  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:47.453397  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:47.676291  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:47.804902  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:47.952130  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:47.953331  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:48.176082  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:48.304733  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:48.451487  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:48.453217  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:48.677121  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:48.805437  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:48.954727  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:48.956792  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:49.175978  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:49.305193  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:49.451900  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:49.453871  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:49.675355  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:49.806163  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:49.951954  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:49.953868  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:50.175637  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:50.305237  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:50.453665  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:50.455454  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:50.676103  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:50.806039  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:50.956752  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:50.957748  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:51.175972  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:51.305549  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:51.452423  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:51.457384  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:51.676600  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:51.805554  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:51.976613  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:51.980556  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:52.176226  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:52.305701  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:52.451505  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:52.453703  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:52.675374  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:52.807450  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:52.956791  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:52.957420  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:53.175607  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:53.306246  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:53.457018  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:53.458274  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:53.676230  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:53.821741  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:53.952188  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:53.953121  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:54.176066  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:54.305403  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:54.452147  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:54.453170  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:54.676112  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:54.805922  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:54.953846  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:54.957693  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:55.175859  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:55.308591  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:55.460171  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:55.461219  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:55.676674  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:55.805983  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:55.956452  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:55.957296  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:56.187090  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:56.304446  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:56.451713  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:56.453117  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:56.679257  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:56.804792  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:56.951189  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:56.953236  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:57.189104  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:57.314547  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:57.453630  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:57.454689  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:57.675604  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:57.805048  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:57.951807  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:57.953348  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:58.176542  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:58.305205  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:58.451680  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:58.452595  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:58.675978  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:58.804953  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:58.951337  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:58.953238  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:59.175963  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:59.322188  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:59.453101  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:59.454462  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:39:59.677227  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:39:59.806334  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:39:59.963574  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:39:59.964443  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:40:00.179733  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:00.348539  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:00.465596  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:00.471781  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:40:00.678096  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:00.820004  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:00.957246  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:00.959545  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:40:01.176438  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:01.306059  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:01.454803  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:40:01.456066  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:01.683377  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:01.806542  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:01.954981  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:40:01.957703  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:02.176434  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:02.306532  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:02.454627  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:02.455197  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:40:02.676887  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:02.804750  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:02.951538  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:02.953161  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 21:40:03.176484  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:03.305820  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:03.452383  990319 kapi.go:107] duration metric: took 1m1.526791187s to wait for kubernetes.io/minikube-addons=registry ...
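
The kapi.go:96 waits that dominate this log repeatedly list pods by label selector and loop while any pod is still Pending, which is why the registry wait just completed after roughly a minute of half-second ticks. A minimal sketch, assuming client-go and a hypothetical helper name waitForLabel (the real helper also checks per-container readiness, not just pod phase):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until every pod matching selector in ns is Running.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false // still Pending, keep polling
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed cadence; the log ticks at roughly this rate
	}
	return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("registry pods Running")
}
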
	I0830 21:40:03.452589  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:03.676361  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:03.807345  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:03.951930  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:04.175542  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:04.312505  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:04.452200  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:04.677059  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:04.805577  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:04.952277  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:05.176698  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:05.304826  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:05.453785  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:05.676922  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:05.810493  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:05.951844  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:06.175831  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:06.309081  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:06.459229  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:06.677826  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:06.805555  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:06.951962  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:07.176162  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:07.307219  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:07.458833  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:07.680282  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:07.807505  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:07.952947  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:08.175982  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:08.305248  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:08.451808  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:08.675692  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:08.804724  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:08.951760  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:09.175530  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:09.305070  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:09.458711  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:09.691021  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:09.804656  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:09.952536  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:10.175428  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:10.305100  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:10.452484  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:10.676176  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:10.805398  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:10.952233  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:11.176231  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:11.305472  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:11.452577  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:11.675258  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:11.805353  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:11.952439  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:12.177081  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:12.306150  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:12.452593  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:12.677841  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:12.805830  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:12.952107  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:13.176763  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 21:40:13.305289  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:13.452515  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:13.676209  990319 kapi.go:107] duration metric: took 1m7.04272369s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0830 21:40:13.678918  990319 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-934429 cluster.
	I0830 21:40:13.681021  990319 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0830 21:40:13.683137  990319 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
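Editor's note: the three out.go lines above state the gcp-auth addon's contract: credentials are mounted into every new pod unless the pod carries the `gcp-auth-skip-secret` label key. A minimal sketch of the opt-out follows; the pod name and image are illustrative assumptions, and only the label key comes from the message above:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-pod                 # illustrative name, not from this run
      labels:
        gcp-auth-skip-secret: "true"   # the key is what the addon checks, per the message above
    spec:
      containers:
      - name: app
        image: nginx                   # illustrative image
    EOF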
	I0830 21:40:13.805413  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:13.951760  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:14.305323  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:14.451492  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:14.805269  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:14.951767  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:15.305645  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:15.452907  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:15.805112  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:15.952276  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:16.306158  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:16.452640  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:16.808237  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:16.961096  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:17.305569  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:17.451733  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:17.808403  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:17.951863  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:18.305459  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:18.451694  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:18.805693  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:18.953607  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:19.304475  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:19.453149  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:19.808733  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:19.952209  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:20.304884  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:20.452409  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:20.806231  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:20.953246  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:21.305078  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:21.452408  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:21.806888  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:21.952717  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:22.305394  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:22.452266  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:22.805202  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:22.953077  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:23.304914  990319 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 21:40:23.451942  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:23.804662  990319 kapi.go:107] duration metric: took 1m21.551320947s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0830 21:40:23.953051  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:24.452434  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:24.952234  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:25.452161  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:25.951652  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:26.451720  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:26.951474  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:27.452697  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:27.951479  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:28.452380  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:28.952569  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:29.452431  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:29.952374  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:30.452345  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:30.955528  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:31.452730  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:31.952556  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:32.461214  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:32.952437  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:33.451996  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:33.951772  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:34.452031  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:34.951924  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:35.451620  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:35.951680  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:36.451925  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:36.952708  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:37.453611  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:37.951696  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:38.454019  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:38.953192  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:39.452290  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:39.952623  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:40.452558  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:40.951966  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:41.452641  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:41.952199  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:42.454043  990319 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 21:40:42.952958  990319 kapi.go:107] duration metric: took 1m41.032138686s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0830 21:40:42.955411  990319 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, default-storageclass, inspektor-gadget, storage-provisioner, metrics-server, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0830 21:40:42.957960  990319 addons.go:502] enable addons completed in 1m46.616187701s: enabled=[cloud-spanner ingress-dns default-storageclass inspektor-gadget storage-provisioner metrics-server volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0830 21:40:42.958039  990319 start.go:233] waiting for cluster config update ...
	I0830 21:40:42.958069  990319 start.go:242] writing updated cluster config ...
	I0830 21:40:42.958461  990319 ssh_runner.go:195] Run: rm -f paused
	I0830 21:40:43.048541  990319 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 21:40:43.052828  990319 out.go:177] * Done! kubectl is now configured to use "addons-934429" cluster and "default" namespace by default
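Editor's note: a quick, out-of-band way to confirm what the "Done!" line claims (current context and default namespace pointed at this cluster); these are standard kubectl commands, not part of the captured run:

    kubectl config current-context                             # expect: addons-934429
    kubectl config view --minify -o jsonpath='{..namespace}'   # expect: default
    kubectl -n ingress-nginx get pods                          # spot-check one enabled addon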
	
	* 
	* ==> CRI-O <==
	* Aug 30 21:44:02 addons-934429 crio[894]: time="2023-08-30 21:44:02.280483205Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=8e19b2d8-4fea-4f71-91c3-23e1a9abae46 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:44:02 addons-934429 crio[894]: time="2023-08-30 21:44:02.280667992Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=8e19b2d8-4fea-4f71-91c3-23e1a9abae46 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:44:02 addons-934429 crio[894]: time="2023-08-30 21:44:02.281420123Z" level=info msg="Creating container: default/hello-world-app-5d77478584-dpfkw/hello-world-app" id=c90598d5-54fa-43e5-afb0-0bc4dd619b0d name=/runtime.v1.RuntimeService/CreateContainer
	Aug 30 21:44:02 addons-934429 crio[894]: time="2023-08-30 21:44:02.281512036Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 30 21:44:02 addons-934429 crio[894]: time="2023-08-30 21:44:02.380732827Z" level=info msg="Created container 2fd4dd84826177ebd05fabe76fa6b2146293677ac0df19e9a82b56efbcb667c4: default/hello-world-app-5d77478584-dpfkw/hello-world-app" id=c90598d5-54fa-43e5-afb0-0bc4dd619b0d name=/runtime.v1.RuntimeService/CreateContainer
	Aug 30 21:44:02 addons-934429 crio[894]: time="2023-08-30 21:44:02.381615854Z" level=info msg="Starting container: 2fd4dd84826177ebd05fabe76fa6b2146293677ac0df19e9a82b56efbcb667c4" id=4d736bc4-f155-4ae0-aca1-77666b3bf906 name=/runtime.v1.RuntimeService/StartContainer
	Aug 30 21:44:02 addons-934429 conmon[7896]: conmon 2fd4dd84826177ebd05f <ninfo>: container 7907 exited with status 1
	Aug 30 21:44:02 addons-934429 crio[894]: time="2023-08-30 21:44:02.403039938Z" level=info msg="Started container" PID=7907 containerID=2fd4dd84826177ebd05fabe76fa6b2146293677ac0df19e9a82b56efbcb667c4 description=default/hello-world-app-5d77478584-dpfkw/hello-world-app id=4d736bc4-f155-4ae0-aca1-77666b3bf906 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c9aecc9997c4d93f98c5050b41c6132135357df69edcf80991d32cda5dff319
	Aug 30 21:44:02 addons-934429 crio[894]: time="2023-08-30 21:44:02.583094215Z" level=info msg="Stopping container: 62a32468ef0ed11390e78198eceed7d8ecbce022c639a902836eb05186d5b732 (timeout: 2s)" id=9f1916f7-2d9b-49a0-a072-a197aca588ab name=/runtime.v1.RuntimeService/StopContainer
	Aug 30 21:44:02 addons-934429 crio[894]: time="2023-08-30 21:44:02.841286720Z" level=info msg="Removing container: e0188a4714997b2547e42a8f1949f3d3de76a0dabca6e4442971fd187d116e92" id=009495bb-619f-4eea-945b-1f4fdd66cf1f name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 30 21:44:02 addons-934429 crio[894]: time="2023-08-30 21:44:02.865732757Z" level=info msg="Removed container e0188a4714997b2547e42a8f1949f3d3de76a0dabca6e4442971fd187d116e92: default/hello-world-app-5d77478584-dpfkw/hello-world-app" id=009495bb-619f-4eea-945b-1f4fdd66cf1f name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 30 21:44:04 addons-934429 crio[894]: time="2023-08-30 21:44:04.592612188Z" level=warning msg="Stopping container 62a32468ef0ed11390e78198eceed7d8ecbce022c639a902836eb05186d5b732 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=9f1916f7-2d9b-49a0-a072-a197aca588ab name=/runtime.v1.RuntimeService/StopContainer
	Aug 30 21:44:04 addons-934429 conmon[5221]: conmon 62a32468ef0ed11390e7 <ninfo>: container 5232 exited with status 137
	Aug 30 21:44:04 addons-934429 crio[894]: time="2023-08-30 21:44:04.746986452Z" level=info msg="Stopped container 62a32468ef0ed11390e78198eceed7d8ecbce022c639a902836eb05186d5b732: ingress-nginx/ingress-nginx-controller-5dcd45b5bf-dzgfs/controller" id=9f1916f7-2d9b-49a0-a072-a197aca588ab name=/runtime.v1.RuntimeService/StopContainer
	Aug 30 21:44:04 addons-934429 crio[894]: time="2023-08-30 21:44:04.747549249Z" level=info msg="Stopping pod sandbox: 50835a1d4c0b990c38909bc28571e00ee797320171636034dd959994d5ef5962" id=069c3be9-68b9-43ef-bae2-1d2a2fe1bf4b name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 30 21:44:04 addons-934429 crio[894]: time="2023-08-30 21:44:04.751125999Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-F4VDZPJRX5LMX5HN - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-GO6XURJDXEIU7CKG - [0:0]\n-X KUBE-HP-GO6XURJDXEIU7CKG\n-X KUBE-HP-F4VDZPJRX5LMX5HN\nCOMMIT\n"
	Aug 30 21:44:04 addons-934429 crio[894]: time="2023-08-30 21:44:04.758247942Z" level=info msg="Closing host port tcp:80"
	Aug 30 21:44:04 addons-934429 crio[894]: time="2023-08-30 21:44:04.758302005Z" level=info msg="Closing host port tcp:443"
	Aug 30 21:44:04 addons-934429 crio[894]: time="2023-08-30 21:44:04.759986991Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 30 21:44:04 addons-934429 crio[894]: time="2023-08-30 21:44:04.760010442Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 30 21:44:04 addons-934429 crio[894]: time="2023-08-30 21:44:04.760194761Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-5dcd45b5bf-dzgfs Namespace:ingress-nginx ID:50835a1d4c0b990c38909bc28571e00ee797320171636034dd959994d5ef5962 UID:ef64dc5d-db5e-4efb-83e7-4dfe0634da00 NetNS:/var/run/netns/618476a5-6dd5-4aa4-8521-935c96c7df5a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 30 21:44:04 addons-934429 crio[894]: time="2023-08-30 21:44:04.760336710Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-5dcd45b5bf-dzgfs from CNI network \"kindnet\" (type=ptp)"
	Aug 30 21:44:04 addons-934429 crio[894]: time="2023-08-30 21:44:04.786888648Z" level=info msg="Stopped pod sandbox: 50835a1d4c0b990c38909bc28571e00ee797320171636034dd959994d5ef5962" id=069c3be9-68b9-43ef-bae2-1d2a2fe1bf4b name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 30 21:44:04 addons-934429 crio[894]: time="2023-08-30 21:44:04.848415371Z" level=info msg="Removing container: 62a32468ef0ed11390e78198eceed7d8ecbce022c639a902836eb05186d5b732" id=5d3405ce-88cd-42fe-b5c0-de6724d6f5aa name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 30 21:44:04 addons-934429 crio[894]: time="2023-08-30 21:44:04.867551877Z" level=info msg="Removed container 62a32468ef0ed11390e78198eceed7d8ecbce022c639a902836eb05186d5b732: ingress-nginx/ingress-nginx-controller-5dcd45b5bf-dzgfs/controller" id=5d3405ce-88cd-42fe-b5c0-de6724d6f5aa name=/runtime.v1.RuntimeService/RemoveContainer
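Editor's note: two signatures stand out in the CRI-O excerpt above. First, hello-world-app exits with status 1 right after "Started container", a crash loop that matches the Exited / ATTEMPT 2 row in the container status section below. Second, the ingress controller does not stop within its 2s timeout, so the runtime kills it and conmon reports exit status 137 (128 + SIGKILL). A hedged way to inspect both from the host, reusing the container ID prefix from the status table below:

    minikube -p addons-934429 ssh -- sudo crictl ps -a | grep hello-world-app
    minikube -p addons-934429 ssh -- sudo crictl logs 2fd4dd8482617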
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2fd4dd8482617       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                             7 seconds ago       Exited              hello-world-app           2                   4c9aecc9997c4       hello-world-app-5d77478584-dpfkw
	b87e086e0418c       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                              2 minutes ago       Running             nginx                     0                   d507e62a1a6d0       nginx
	6e314bb10f14b       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98                        3 minutes ago       Running             headlamp                  0                   863a498e64452       headlamp-699c48fb74-8bj69
	8d454a39ac3a7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 3 minutes ago       Running             gcp-auth                  0                   fe666cb44d71d       gcp-auth-d4c87556c-t6fg4
	441d3043b9670       8f2588812ab2947d53d2f99b11142e2be088330ec67837bb82801c0d3501af78                                                             4 minutes ago       Exited              patch                     1                   14afe6ee9794b       ingress-nginx-admission-patch-9zbp6
	90020c95ee8ae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   4 minutes ago       Exited              create                    0                   b0a6d02148017       ingress-nginx-admission-create-8jbh6
	d2b400eb5cc4b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago       Running             storage-provisioner       0                   d8a95784766d8       storage-provisioner
	db02b714c0510       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago       Running             coredns                   0                   6cf95055fc6bb       coredns-5dd5756b68-rnjvd
	7841b2729a7ef       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                                             5 minutes ago       Running             kindnet-cni               0                   61b76f1e9df76       kindnet-2pbbt
	c95255695e70b       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26                                                             5 minutes ago       Running             kube-proxy                0                   86fc0c14651b1       kube-proxy-w6q6w
	a3c5c6930ad87       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87                                                             5 minutes ago       Running             kube-scheduler            0                   9e213c1c146fb       kube-scheduler-addons-934429
	5feb513fdbebb       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             5 minutes ago       Running             etcd                      0                   87b1902453ff2       etcd-addons-934429
	230dd41f9d282       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965                                                             5 minutes ago       Running             kube-controller-manager   0                   1d771f8222c86       kube-controller-manager-addons-934429
	36e1b622df11d       b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a                                                             5 minutes ago       Running             kube-apiserver            0                   4c171c18ead7e       kube-apiserver-addons-934429
	
	* 
	* ==> coredns [db02b714c0510215fa3cba1529e7688e3b90fbed7b48c440d47ea4ac163cc15e] <==
	* [INFO] 10.244.0.17:45828 - 59745 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000078072s
	[INFO] 10.244.0.17:45828 - 8489 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000049485s
	[INFO] 10.244.0.17:45828 - 6397 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048656s
	[INFO] 10.244.0.17:45828 - 11007 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077522s
	[INFO] 10.244.0.17:45828 - 43410 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001060585s
	[INFO] 10.244.0.17:45828 - 46437 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000894038s
	[INFO] 10.244.0.17:45828 - 42561 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055007s
	[INFO] 10.244.0.17:45104 - 8822 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000111663s
	[INFO] 10.244.0.17:45104 - 59250 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000068973s
	[INFO] 10.244.0.17:51317 - 2336 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00013257s
	[INFO] 10.244.0.17:45104 - 9631 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004036s
	[INFO] 10.244.0.17:45104 - 51433 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051675s
	[INFO] 10.244.0.17:51317 - 1813 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053596s
	[INFO] 10.244.0.17:45104 - 54572 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040542s
	[INFO] 10.244.0.17:51317 - 13047 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039943s
	[INFO] 10.244.0.17:45104 - 50877 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004398s
	[INFO] 10.244.0.17:51317 - 33751 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004114s
	[INFO] 10.244.0.17:51317 - 27983 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000359688s
	[INFO] 10.244.0.17:45104 - 54280 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00112776s
	[INFO] 10.244.0.17:51317 - 5820 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000133587s
	[INFO] 10.244.0.17:45104 - 29580 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000871933s
	[INFO] 10.244.0.17:45104 - 57911 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006912s
	[INFO] 10.244.0.17:51317 - 43312 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001540338s
	[INFO] 10.244.0.17:51317 - 7373 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000825082s
	[INFO] 10.244.0.17:51317 - 55148 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050708s
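Editor's note: the NXDOMAIN burst above is normal resolver behavior, not a failure by itself. With the Kubernetes default of options ndots:5, a name with fewer than five dots (hello-world-app.default.svc.cluster.local has four) is first tried against every search domain (svc.cluster.local, cluster.local, and us-east-2.compute.internal, the last inherited from the host) before the bare name answers NOERROR. A hedged check against the nginx pod from this run; the commented output is typical, and the nameserver IP varies by cluster:

    kubectl exec nginx -- cat /etc/resolv.conf
    # search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    # nameserver 10.96.0.10
    # options ndots:5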
	
	* 
	* ==> describe nodes <==
	* Name:               addons-934429
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-934429
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7e60a4db8510b81002db541520f138fed781588
	                    minikube.k8s.io/name=addons-934429
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T21_38_45_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-934429
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 21:38:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-934429
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 21:44:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 21:41:48 +0000   Wed, 30 Aug 2023 21:38:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 21:41:48 +0000   Wed, 30 Aug 2023 21:38:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 21:41:48 +0000   Wed, 30 Aug 2023 21:38:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 21:41:48 +0000   Wed, 30 Aug 2023 21:39:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-934429
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022572Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022572Ki
	  pods:               110
	System Info:
	  Machine ID:                 536b756487dd4b4cb990f27f6bf2655b
	  System UUID:                e4be39eb-a63c-44eb-a065-7a79f5ac404f
	  Boot ID:                    98673563-8173-4281-afb4-eac1dfafdc23
	  Kernel Version:             5.15.0-1043-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-dpfkw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  gcp-auth                    gcp-auth-d4c87556c-t6fg4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  headlamp                    headlamp-699c48fb74-8bj69                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  kube-system                 coredns-5dd5756b68-rnjvd                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m13s
	  kube-system                 etcd-addons-934429                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m29s
	  kube-system                 kindnet-2pbbt                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m14s
	  kube-system                 kube-apiserver-addons-934429             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-addons-934429    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-proxy-w6q6w                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-scheduler-addons-934429             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m8s                   kube-proxy       
	  Normal  Starting                 5m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m34s (x8 over 5m34s)  kubelet          Node addons-934429 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s (x8 over 5m34s)  kubelet          Node addons-934429 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s (x8 over 5m34s)  kubelet          Node addons-934429 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m26s                  kubelet          Node addons-934429 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s                  kubelet          Node addons-934429 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s                  kubelet          Node addons-934429 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m15s                  node-controller  Node addons-934429 event: Registered Node addons-934429 in Controller
	  Normal  NodeReady                4m40s                  kubelet          Node addons-934429 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001065] FS-Cache: O-key=[8] 'fe3d5c0100000000'
	[  +0.000745] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=00000000d8a48a2b{9p.inode} n=000000006547550d
	[  +0.001035] FS-Cache: N-key=[8] 'fe3d5c0100000000'
	[  +2.727104] FS-Cache: Duplicate cookie detected
	[  +0.000777] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000987] FS-Cache: O-cookie d=00000000d8a48a2b{9p.inode} n=000000006d053276
	[  +0.001146] FS-Cache: O-key=[8] 'fd3d5c0100000000'
	[  +0.000716] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000960] FS-Cache: N-cookie d=00000000d8a48a2b{9p.inode} n=0000000095e2f235
	[  +0.001052] FS-Cache: N-key=[8] 'fd3d5c0100000000'
	[  +0.378288] FS-Cache: Duplicate cookie detected
	[  +0.000710] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001124] FS-Cache: O-cookie d=00000000d8a48a2b{9p.inode} n=00000000c68a6716
	[  +0.001196] FS-Cache: O-key=[8] '033e5c0100000000'
	[  +0.000794] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=00000000d8a48a2b{9p.inode} n=00000000ec866464
	[  +0.001128] FS-Cache: N-key=[8] '033e5c0100000000'
	[  +3.661121] FS-Cache: Duplicate cookie detected
	[  +0.000719] FS-Cache: O-cookie c=00000049 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001015] FS-Cache: O-cookie d=00000000ee697b3f{9P.session} n=000000007881e0f6
	[  +0.001093] FS-Cache: O-key=[10] '34323939363835373037'
	[  +0.000828] FS-Cache: N-cookie c=0000004a [p=00000002 fl=2 nc=0 na=1]
	[  +0.000946] FS-Cache: N-cookie d=00000000ee697b3f{9P.session} n=0000000088fb1c8a
	[  +0.001140] FS-Cache: N-key=[10] '34323939363835373037'
	
	* 
	* ==> etcd [5feb513fdbebb46bf1907d9bbd4bcf9e72897853b1b1523efe804fe96b7b71fd] <==
	* {"level":"info","ts":"2023-08-30T21:38:57.988314Z","caller":"traceutil/trace.go:171","msg":"trace[256221244] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"117.055682ms","start":"2023-08-30T21:38:57.871248Z","end":"2023-08-30T21:38:57.988304Z","steps":["trace[256221244] 'process raft request'  (duration: 114.328802ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T21:38:57.988516Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.391451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-30T21:38:57.988595Z","caller":"traceutil/trace.go:171","msg":"trace[1971627891] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:380; }","duration":"117.48549ms","start":"2023-08-30T21:38:57.871102Z","end":"2023-08-30T21:38:57.988588Z","steps":["trace[1971627891] 'agreement among raft nodes before linearized reading'  (duration: 117.372465ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T21:38:57.988781Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.554585ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-w6q6w\" ","response":"range_response_count:1 size:3426"}
	{"level":"info","ts":"2023-08-30T21:38:57.989521Z","caller":"traceutil/trace.go:171","msg":"trace[18165746] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-w6q6w; range_end:; response_count:1; response_revision:380; }","duration":"118.291807ms","start":"2023-08-30T21:38:57.871218Z","end":"2023-08-30T21:38:57.989509Z","steps":["trace[18165746] 'agreement among raft nodes before linearized reading'  (duration: 117.529223ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T21:38:57.989861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.500454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-node-lease/\" range_end:\"/registry/serviceaccounts/kube-node-lease0\" ","response":"range_response_count:1 size:187"}
	{"level":"info","ts":"2023-08-30T21:38:57.990453Z","caller":"traceutil/trace.go:171","msg":"trace[1966608214] range","detail":"{range_begin:/registry/serviceaccounts/kube-node-lease/; range_end:/registry/serviceaccounts/kube-node-lease0; response_count:1; response_revision:380; }","duration":"119.094537ms","start":"2023-08-30T21:38:57.871346Z","end":"2023-08-30T21:38:57.990441Z","steps":["trace[1966608214] 'agreement among raft nodes before linearized reading'  (duration: 118.46445ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T21:38:58.866645Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.79096ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-30T21:38:58.866714Z","caller":"traceutil/trace.go:171","msg":"trace[1556574286] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:390; }","duration":"102.870968ms","start":"2023-08-30T21:38:58.76383Z","end":"2023-08-30T21:38:58.866701Z","steps":["trace[1556574286] 'agreement among raft nodes before linearized reading'  (duration: 102.731079ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-30T21:38:59.885621Z","caller":"traceutil/trace.go:171","msg":"trace[247735801] linearizableReadLoop","detail":"{readStateIndex:419; appliedIndex:417; }","duration":"162.961054ms","start":"2023-08-30T21:38:59.722645Z","end":"2023-08-30T21:38:59.885606Z","steps":["trace[247735801] 'read index received'  (duration: 99.155626ms)","trace[247735801] 'applied index is now lower than readState.Index'  (duration: 63.804935ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-30T21:38:59.885839Z","caller":"traceutil/trace.go:171","msg":"trace[1211556327] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"163.962464ms","start":"2023-08-30T21:38:59.721868Z","end":"2023-08-30T21:38:59.885831Z","steps":["trace[1211556327] 'process raft request'  (duration: 99.925537ms)","trace[1211556327] 'compare'  (duration: 63.546819ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-30T21:38:59.886071Z","caller":"traceutil/trace.go:171","msg":"trace[1259055650] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"163.579006ms","start":"2023-08-30T21:38:59.722486Z","end":"2023-08-30T21:38:59.886065Z","steps":["trace[1259055650] 'process raft request'  (duration: 162.974864ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-30T21:38:59.886163Z","caller":"traceutil/trace.go:171","msg":"trace[1804539774] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"113.578017ms","start":"2023-08-30T21:38:59.772579Z","end":"2023-08-30T21:38:59.886157Z","steps":["trace[1804539774] 'process raft request'  (duration: 112.932275ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-30T21:38:59.886244Z","caller":"traceutil/trace.go:171","msg":"trace[435898958] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"113.582752ms","start":"2023-08-30T21:38:59.772656Z","end":"2023-08-30T21:38:59.886239Z","steps":["trace[435898958] 'process raft request'  (duration: 112.885834ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-30T21:38:59.88633Z","caller":"traceutil/trace.go:171","msg":"trace[2050693151] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"113.631187ms","start":"2023-08-30T21:38:59.772693Z","end":"2023-08-30T21:38:59.886325Z","steps":["trace[2050693151] 'process raft request'  (duration: 112.872969ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T21:38:59.886456Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.798927ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-tjvz9\" ","response":"range_response_count:1 size:3994"}
	{"level":"info","ts":"2023-08-30T21:38:59.886477Z","caller":"traceutil/trace.go:171","msg":"trace[1500977701] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-tjvz9; range_end:; response_count:1; response_revision:411; }","duration":"163.843842ms","start":"2023-08-30T21:38:59.722627Z","end":"2023-08-30T21:38:59.886471Z","steps":["trace[1500977701] 'agreement among raft nodes before linearized reading'  (duration: 163.772646ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T21:38:59.886578Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.06006ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-30T21:38:59.886597Z","caller":"traceutil/trace.go:171","msg":"trace[923821102] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:411; }","duration":"114.077767ms","start":"2023-08-30T21:38:59.772512Z","end":"2023-08-30T21:38:59.88659Z","steps":["trace[923821102] 'agreement among raft nodes before linearized reading'  (duration: 114.047466ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T21:38:59.886981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.250452ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-08-30T21:38:59.887002Z","caller":"traceutil/trace.go:171","msg":"trace[1713057471] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:411; }","duration":"114.27446ms","start":"2023-08-30T21:38:59.772722Z","end":"2023-08-30T21:38:59.886997Z","steps":["trace[1713057471] 'agreement among raft nodes before linearized reading'  (duration: 114.232934ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T21:38:59.90129Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.627534ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3143"}
	{"level":"info","ts":"2023-08-30T21:38:59.907382Z","caller":"traceutil/trace.go:171","msg":"trace[564764625] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:411; }","duration":"134.726347ms","start":"2023-08-30T21:38:59.772636Z","end":"2023-08-30T21:38:59.907362Z","steps":["trace[564764625] 'agreement among raft nodes before linearized reading'  (duration: 114.447301ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-30T21:39:00.722304Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.403106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-08-30T21:39:00.722375Z","caller":"traceutil/trace.go:171","msg":"trace[126098820] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:457; }","duration":"106.475171ms","start":"2023-08-30T21:39:00.615882Z","end":"2023-08-30T21:39:00.722357Z","steps":["trace[126098820] 'agreement among raft nodes before linearized reading'  (duration: 90.357964ms)","trace[126098820] 'get authentication metadata'  (duration: 16.032054ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [8d454a39ac3a7d55d350cb912f495e5c4dae600f8428937e959976614d19bf1e] <==
	* 2023/08/30 21:40:13 GCP Auth Webhook started!
	2023/08/30 21:40:50 Ready to marshal response ...
	2023/08/30 21:40:50 Ready to write response ...
	2023/08/30 21:40:50 Ready to marshal response ...
	2023/08/30 21:40:50 Ready to write response ...
	2023/08/30 21:40:50 Ready to marshal response ...
	2023/08/30 21:40:50 Ready to write response ...
	2023/08/30 21:40:53 Ready to marshal response ...
	2023/08/30 21:40:53 Ready to write response ...
	2023/08/30 21:41:17 Ready to marshal response ...
	2023/08/30 21:41:17 Ready to write response ...
	2023/08/30 21:41:18 Ready to marshal response ...
	2023/08/30 21:41:18 Ready to write response ...
	2023/08/30 21:41:48 Ready to marshal response ...
	2023/08/30 21:41:48 Ready to write response ...
	2023/08/30 21:43:44 Ready to marshal response ...
	2023/08/30 21:43:44 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:44:10 up  6:26,  0 users,  load average: 1.33, 1.81, 2.15
	Linux addons-934429 5.15.0-1043-aws #48~20.04.1-Ubuntu SMP Wed Aug 16 18:32:42 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [7841b2729a7ef9c295641ad6c5a0a6faf672520f988494d3179fb3055f4f8e1c] <==
	* I0830 21:42:00.840902       1 main.go:227] handling current node
	I0830 21:42:10.844961       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:42:10.844987       1 main.go:227] handling current node
	I0830 21:42:20.855612       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:42:20.855636       1 main.go:227] handling current node
	I0830 21:42:30.859373       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:42:30.859401       1 main.go:227] handling current node
	I0830 21:42:40.868394       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:42:40.868420       1 main.go:227] handling current node
	I0830 21:42:50.872471       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:42:50.872500       1 main.go:227] handling current node
	I0830 21:43:00.876535       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:43:00.876564       1 main.go:227] handling current node
	I0830 21:43:10.880887       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:43:10.880919       1 main.go:227] handling current node
	I0830 21:43:20.893753       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:43:20.893862       1 main.go:227] handling current node
	I0830 21:43:30.897665       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:43:30.897693       1 main.go:227] handling current node
	I0830 21:43:40.909741       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:43:40.909769       1 main.go:227] handling current node
	I0830 21:43:50.922090       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:43:50.922116       1 main.go:227] handling current node
	I0830 21:44:00.926559       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:44:00.926668       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [36e1b622df11dc750bba70c3d61b2fc7ac8648388e242c0d360ec2499180e6d1] <==
	* I0830 21:42:04.302375       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0830 21:42:04.303274       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0830 21:42:04.320681       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0830 21:42:04.320831       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0830 21:42:04.337911       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0830 21:42:04.337971       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0830 21:42:04.351286       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0830 21:42:04.351928       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0830 21:42:04.358652       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0830 21:42:04.358797       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0830 21:42:04.391564       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0830 21:42:04.391677       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0830 21:42:04.421827       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0830 21:42:04.421928       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0830 21:42:04.423125       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0830 21:42:04.424437       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	W0830 21:42:05.359439       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0830 21:42:05.392271       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0830 21:42:05.412083       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0830 21:42:42.681937       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0830 21:42:42.681964       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 21:42:42.682004       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 21:42:42.682013       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0830 21:43:44.799451       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.159.48"}
	
	* 
	* ==> kube-controller-manager [230dd41f9d28247a05df313963c1c0322bcde82825f050237ae5dd59d3e7504b] <==
	* W0830 21:43:18.513709       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0830 21:43:18.513742       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0830 21:43:18.646746       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0830 21:43:18.646780       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0830 21:43:43.685617       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0830 21:43:43.685649       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0830 21:43:44.498717       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0830 21:43:44.536322       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-dpfkw"
	I0830 21:43:44.549459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="51.516825ms"
	I0830 21:43:44.569419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="19.837871ms"
	I0830 21:43:44.569501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="39.918µs"
	I0830 21:43:44.574136       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="75.355µs"
	I0830 21:43:47.820635       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.839µs"
	I0830 21:43:48.827112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="85.349µs"
	I0830 21:43:49.823389       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="80.607µs"
	W0830 21:43:51.596354       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0830 21:43:51.596388       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0830 21:43:52.598466       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0830 21:43:52.598498       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0830 21:43:57.597525       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0830 21:43:57.597584       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0830 21:44:01.561496       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0830 21:44:01.564373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5dcd45b5bf" duration="6.752µs"
	I0830 21:44:01.570257       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0830 21:44:02.854581       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="67.971µs"
	
	* 
	* ==> kube-proxy [c95255695e70b235030ccd9bd491563533398f1528be970953b657b3e9c61e0c] <==
	* I0830 21:39:01.450807       1 server_others.go:69] "Using iptables proxy"
	I0830 21:39:01.494398       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0830 21:39:01.672173       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0830 21:39:01.678697       1 server_others.go:152] "Using iptables Proxier"
	I0830 21:39:01.678744       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0830 21:39:01.678753       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0830 21:39:01.678799       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0830 21:39:01.679120       1 server.go:846] "Version info" version="v1.28.1"
	I0830 21:39:01.679137       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 21:39:01.680139       1 config.go:188] "Starting service config controller"
	I0830 21:39:01.680206       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0830 21:39:01.680231       1 config.go:97] "Starting endpoint slice config controller"
	I0830 21:39:01.680235       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0830 21:39:01.680831       1 config.go:315] "Starting node config controller"
	I0830 21:39:01.680846       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0830 21:39:01.780720       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0830 21:39:01.780771       1 shared_informer.go:318] Caches are synced for service config
	I0830 21:39:01.781087       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [a3c5c6930ad87933cbed9de4cf331bd950941471e1331f9a6e534328e3468051] <==
	* W0830 21:38:40.899785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0830 21:38:40.899799       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0830 21:38:40.899846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 21:38:40.899860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0830 21:38:40.899914       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0830 21:38:40.899928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0830 21:38:40.899991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 21:38:40.900006       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0830 21:38:40.900054       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0830 21:38:40.900067       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0830 21:38:40.900199       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0830 21:38:40.900239       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0830 21:38:41.738197       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 21:38:41.738319       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0830 21:38:41.753398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0830 21:38:41.753435       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0830 21:38:41.753499       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0830 21:38:41.753516       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0830 21:38:41.767047       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0830 21:38:41.767161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0830 21:38:41.859591       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0830 21:38:41.859628       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0830 21:38:41.872132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0830 21:38:41.872165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0830 21:38:44.990628       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Aug 30 21:43:50 addons-934429 kubelet[1361]: E0830 21:43:50.279279    1361 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(77abf737-6a2d-4313-a01a-8bebbde6bc58)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="77abf737-6a2d-4313-a01a-8bebbde6bc58"
	Aug 30 21:43:54 addons-934429 kubelet[1361]: E0830 21:43:54.667321    1361 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/232a9233b744820ea60e2f90d7f00ff1e8e284f76c0718525a7fa2dc07cfc72b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/232a9233b744820ea60e2f90d7f00ff1e8e284f76c0718525a7fa2dc07cfc72b/diff: no such file or directory, extraDiskErr: <nil>
	Aug 30 21:44:00 addons-934429 kubelet[1361]: I0830 21:44:00.790877    1361 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgnpk\" (UniqueName: \"kubernetes.io/projected/77abf737-6a2d-4313-a01a-8bebbde6bc58-kube-api-access-sgnpk\") pod \"77abf737-6a2d-4313-a01a-8bebbde6bc58\" (UID: \"77abf737-6a2d-4313-a01a-8bebbde6bc58\") "
	Aug 30 21:44:00 addons-934429 kubelet[1361]: I0830 21:44:00.795938    1361 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77abf737-6a2d-4313-a01a-8bebbde6bc58-kube-api-access-sgnpk" (OuterVolumeSpecName: "kube-api-access-sgnpk") pod "77abf737-6a2d-4313-a01a-8bebbde6bc58" (UID: "77abf737-6a2d-4313-a01a-8bebbde6bc58"). InnerVolumeSpecName "kube-api-access-sgnpk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 30 21:44:00 addons-934429 kubelet[1361]: I0830 21:44:00.832305    1361 scope.go:117] "RemoveContainer" containerID="1c815a6f2b25d3de1535c89d1805e59a40d9f37c6223a41308ee882d14f40c60"
	Aug 30 21:44:00 addons-934429 kubelet[1361]: I0830 21:44:00.891838    1361 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sgnpk\" (UniqueName: \"kubernetes.io/projected/77abf737-6a2d-4313-a01a-8bebbde6bc58-kube-api-access-sgnpk\") on node \"addons-934429\" DevicePath \"\""
	Aug 30 21:44:02 addons-934429 kubelet[1361]: E0830 21:44:02.160784    1361 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f13829eed23a3c5cfdb8544dde6417a36d90b157f3d5169903a6c8d601b86ceb/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f13829eed23a3c5cfdb8544dde6417a36d90b157f3d5169903a6c8d601b86ceb/diff: no such file or directory, extraDiskErr: <nil>
	Aug 30 21:44:02 addons-934429 kubelet[1361]: I0830 21:44:02.278719    1361 scope.go:117] "RemoveContainer" containerID="e0188a4714997b2547e42a8f1949f3d3de76a0dabca6e4442971fd187d116e92"
	Aug 30 21:44:02 addons-934429 kubelet[1361]: I0830 21:44:02.280212    1361 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3e85a2f8-ce8e-43b0-95f9-aef56f7a5384" path="/var/lib/kubelet/pods/3e85a2f8-ce8e-43b0-95f9-aef56f7a5384/volumes"
	Aug 30 21:44:02 addons-934429 kubelet[1361]: I0830 21:44:02.281902    1361 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="77abf737-6a2d-4313-a01a-8bebbde6bc58" path="/var/lib/kubelet/pods/77abf737-6a2d-4313-a01a-8bebbde6bc58/volumes"
	Aug 30 21:44:02 addons-934429 kubelet[1361]: I0830 21:44:02.290033    1361 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7bd2e659-b1cd-44a8-8fcf-0967b37fabcd" path="/var/lib/kubelet/pods/7bd2e659-b1cd-44a8-8fcf-0967b37fabcd/volumes"
	Aug 30 21:44:02 addons-934429 kubelet[1361]: I0830 21:44:02.839320    1361 scope.go:117] "RemoveContainer" containerID="e0188a4714997b2547e42a8f1949f3d3de76a0dabca6e4442971fd187d116e92"
	Aug 30 21:44:02 addons-934429 kubelet[1361]: I0830 21:44:02.839526    1361 scope.go:117] "RemoveContainer" containerID="2fd4dd84826177ebd05fabe76fa6b2146293677ac0df19e9a82b56efbcb667c4"
	Aug 30 21:44:02 addons-934429 kubelet[1361]: E0830 21:44:02.839794    1361 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-dpfkw_default(646b5152-26b3-4225-94c7-08be31970d85)\"" pod="default/hello-world-app-5d77478584-dpfkw" podUID="646b5152-26b3-4225-94c7-08be31970d85"
	Aug 30 21:44:04 addons-934429 kubelet[1361]: I0830 21:44:04.845501    1361 scope.go:117] "RemoveContainer" containerID="62a32468ef0ed11390e78198eceed7d8ecbce022c639a902836eb05186d5b732"
	Aug 30 21:44:04 addons-934429 kubelet[1361]: I0830 21:44:04.867813    1361 scope.go:117] "RemoveContainer" containerID="62a32468ef0ed11390e78198eceed7d8ecbce022c639a902836eb05186d5b732"
	Aug 30 21:44:04 addons-934429 kubelet[1361]: E0830 21:44:04.868219    1361 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62a32468ef0ed11390e78198eceed7d8ecbce022c639a902836eb05186d5b732\": container with ID starting with 62a32468ef0ed11390e78198eceed7d8ecbce022c639a902836eb05186d5b732 not found: ID does not exist" containerID="62a32468ef0ed11390e78198eceed7d8ecbce022c639a902836eb05186d5b732"
	Aug 30 21:44:04 addons-934429 kubelet[1361]: I0830 21:44:04.868269    1361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62a32468ef0ed11390e78198eceed7d8ecbce022c639a902836eb05186d5b732"} err="failed to get container status \"62a32468ef0ed11390e78198eceed7d8ecbce022c639a902836eb05186d5b732\": rpc error: code = NotFound desc = could not find container \"62a32468ef0ed11390e78198eceed7d8ecbce022c639a902836eb05186d5b732\": container with ID starting with 62a32468ef0ed11390e78198eceed7d8ecbce022c639a902836eb05186d5b732 not found: ID does not exist"
	Aug 30 21:44:04 addons-934429 kubelet[1361]: I0830 21:44:04.921846    1361 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr7r4\" (UniqueName: \"kubernetes.io/projected/ef64dc5d-db5e-4efb-83e7-4dfe0634da00-kube-api-access-xr7r4\") pod \"ef64dc5d-db5e-4efb-83e7-4dfe0634da00\" (UID: \"ef64dc5d-db5e-4efb-83e7-4dfe0634da00\") "
	Aug 30 21:44:04 addons-934429 kubelet[1361]: I0830 21:44:04.921932    1361 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef64dc5d-db5e-4efb-83e7-4dfe0634da00-webhook-cert\") pod \"ef64dc5d-db5e-4efb-83e7-4dfe0634da00\" (UID: \"ef64dc5d-db5e-4efb-83e7-4dfe0634da00\") "
	Aug 30 21:44:04 addons-934429 kubelet[1361]: I0830 21:44:04.925384    1361 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef64dc5d-db5e-4efb-83e7-4dfe0634da00-kube-api-access-xr7r4" (OuterVolumeSpecName: "kube-api-access-xr7r4") pod "ef64dc5d-db5e-4efb-83e7-4dfe0634da00" (UID: "ef64dc5d-db5e-4efb-83e7-4dfe0634da00"). InnerVolumeSpecName "kube-api-access-xr7r4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 30 21:44:04 addons-934429 kubelet[1361]: I0830 21:44:04.927841    1361 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef64dc5d-db5e-4efb-83e7-4dfe0634da00-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ef64dc5d-db5e-4efb-83e7-4dfe0634da00" (UID: "ef64dc5d-db5e-4efb-83e7-4dfe0634da00"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 30 21:44:05 addons-934429 kubelet[1361]: I0830 21:44:05.022238    1361 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xr7r4\" (UniqueName: \"kubernetes.io/projected/ef64dc5d-db5e-4efb-83e7-4dfe0634da00-kube-api-access-xr7r4\") on node \"addons-934429\" DevicePath \"\""
	Aug 30 21:44:05 addons-934429 kubelet[1361]: I0830 21:44:05.022277    1361 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef64dc5d-db5e-4efb-83e7-4dfe0634da00-webhook-cert\") on node \"addons-934429\" DevicePath \"\""
	Aug 30 21:44:06 addons-934429 kubelet[1361]: I0830 21:44:06.280527    1361 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ef64dc5d-db5e-4efb-83e7-4dfe0634da00" path="/var/lib/kubelet/pods/ef64dc5d-db5e-4efb-83e7-4dfe0634da00/volumes"
	
	* 
	* ==> storage-provisioner [d2b400eb5cc4ba92b190651b088bf310047adc79549744968d6338c842584a35] <==
	* I0830 21:39:32.134334       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0830 21:39:32.237242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0830 21:39:32.237359       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0830 21:39:32.299303       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0830 21:39:32.301179       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-934429_742a4079-8733-4a18-a687-a8fce3875da9!
	I0830 21:39:32.305216       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"217d8d02-9dab-4550-909e-744f283458e2", APIVersion:"v1", ResourceVersion:"850", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-934429_742a4079-8733-4a18-a687-a8fce3875da9 became leader
	I0830 21:39:32.402987       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-934429_742a4079-8733-4a18-a687-a8fce3875da9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-934429 -n addons-934429
helpers_test.go:261: (dbg) Run:  kubectl --context addons-934429 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (174.16s)
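
Note on the kubelet section above: it records CrashLoopBackOff for kube-ingress-dns-minikube and for hello-world-app-5d77478584-dpfkw. A minimal triage sketch for back-offs like these, reusing the pod names from that log (these commands are not part of the recorded run):

	kubectl --context addons-934429 -n kube-system describe pod kube-ingress-dns-minikube
	kubectl --context addons-934429 -n kube-system logs kube-ingress-dns-minikube --previous
	kubectl --context addons-934429 logs hello-world-app-5d77478584-dpfkw --previous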

TestFunctional/parallel/PersistentVolumeClaim (189.47s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1a771d92-1ee8-4f9c-a683-dc6c42158c24] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013602894s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-540436 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-540436 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-540436 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-540436 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [87c99d6f-d364-47fa-813a-ea2f2e5c3712] Pending
helpers_test.go:344: "sp-pod" [87c99d6f-d364-47fa-813a-ea2f2e5c3712] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0830 21:50:43.073178  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 21:51:10.755062  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-540436 -n functional-540436
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2023-08-30 21:51:57.692910886 +0000 UTC m=+872.495065482
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-540436 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-540436 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-540436/192.168.49.2
Start Time:       Wed, 30 Aug 2023 21:48:57 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:  10.244.0.6
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2c7nd (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-2c7nd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/sp-pod to functional-540436
  Warning  Failed     2m25s                kubelet            Failed to pull image "docker.io/nginx": pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": net/http: TLS handshake timeout
  Warning  Failed     111s                 kubelet            Failed to pull image "docker.io/nginx": pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": read tcp 192.168.49.2:53552->44.205.64.79:443: read: connection reset by peer
  Normal   Pulling    60s (x4 over 3m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     50s (x2 over 2m49s)  kubelet            Failed to pull image "docker.io/nginx": Get "https://auth.docker.io/token?scope=repository%3Alibrary%2Fnginx%3Apull&service=registry.docker.io": net/http: TLS handshake timeout
  Warning  Failed     50s (x4 over 2m49s)  kubelet            Error: ErrImagePull
  Warning  Failed     35s (x6 over 2m49s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    22s (x7 over 2m49s)  kubelet            Back-off pulling image "docker.io/nginx"
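
The repeated pull failures above point at registry connectivity from the node rather than at the pod spec itself. A quick probe from inside the minikube node, as a sketch assuming curl and crictl are present in the kicbase image (not part of the recorded run):

	out/minikube-linux-arm64 -p functional-540436 ssh -- curl -sI --max-time 10 https://registry-1.docker.io/v2/
	out/minikube-linux-arm64 -p functional-540436 ssh -- sudo crictl pull docker.io/library/nginx:latest

A 401 response with a WWW-Authenticate header from the first command would mean the registry is reachable and the handshake timeouts were transient.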
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-540436 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-540436 logs sp-pod -n default: exit status 1 (103.062263ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-540436 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
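
The fixtures testdata/storage-provisioner/pvc.yaml and pod.yaml are not reproduced in this report. A minimal equivalent, reconstructed from the describe output above, is sketched below; the claim name, pod name, label, container name, image, and mount path come from the report, while the access mode and storage size are assumptions:

	kubectl --context functional-540436 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes:
	    - ReadWriteOnce          # assumption: not shown in the report
	  resources:
	    requests:
	      storage: 500Mi         # assumption: not shown in the report
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: sp-pod
	  labels:
	    test: storage-provisioner
	spec:
	  containers:
	    - name: myfrontend
	      image: docker.io/nginx
	      volumeMounts:
	        - mountPath: /tmp/mount
	          name: mypd
	  volumes:
	    - name: mypd
	      persistentVolumeClaim:
	        claimName: myclaim
	EOF

With no storageClassName set, the claim binds through the cluster's default storage class, which on minikube is the hostpath provisioner whose log appears earlier in this report.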
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-540436
helpers_test.go:235: (dbg) docker inspect functional-540436:

-- stdout --
	[
	    {
	        "Id": "e181e8543ebef24ea4f19b0b5104675634648e809a55d154199a454e1f4061cd",
	        "Created": "2023-08-30T21:45:38.975618283Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1005035,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-30T21:45:39.326745597Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c0704b3a4f8b9b9ec71e677be36506d49ffd7d56513ca0bdb5d12d8921195405",
	        "ResolvConfPath": "/var/lib/docker/containers/e181e8543ebef24ea4f19b0b5104675634648e809a55d154199a454e1f4061cd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e181e8543ebef24ea4f19b0b5104675634648e809a55d154199a454e1f4061cd/hostname",
	        "HostsPath": "/var/lib/docker/containers/e181e8543ebef24ea4f19b0b5104675634648e809a55d154199a454e1f4061cd/hosts",
	        "LogPath": "/var/lib/docker/containers/e181e8543ebef24ea4f19b0b5104675634648e809a55d154199a454e1f4061cd/e181e8543ebef24ea4f19b0b5104675634648e809a55d154199a454e1f4061cd-json.log",
	        "Name": "/functional-540436",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-540436:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-540436",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49e89f99ed9a1ecba448b9cdc667f521be04180ad81eba828a30b033e8388251-init/diff:/var/lib/docker/overlay2/5a8abadbbe02000d4a1cbd31235f9b3bba474489fe1515f2d12f946a2d011f32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49e89f99ed9a1ecba448b9cdc667f521be04180ad81eba828a30b033e8388251/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49e89f99ed9a1ecba448b9cdc667f521be04180ad81eba828a30b033e8388251/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49e89f99ed9a1ecba448b9cdc667f521be04180ad81eba828a30b033e8388251/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-540436",
	                "Source": "/var/lib/docker/volumes/functional-540436/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-540436",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-540436",
	                "name.minikube.sigs.k8s.io": "functional-540436",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "474b54ebe027e907697b88033c9c85a241a1c80aa3a6ba1e230937a49046d076",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34023"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34022"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34019"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34021"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34020"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/474b54ebe027",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-540436": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e181e8543ebe",
	                        "functional-540436"
	                    ],
	                    "NetworkID": "cdcbce7d0028fa300d9cd821a4d1bd4f031e1e36d3ef1f42ba3ad87f0ed3bf6b",
	                    "EndpointID": "1b4e7ae19a4b73f52188df338dc5f146cf0bb725bbef27a3b42e49df08379afa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
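
The Ports block in the inspect output shows how the kicbase container publishes the node's exposed ports (22, 2376, 5000, 8441, 32443) on 127.0.0.1 with dynamically assigned host ports. The same mappings can be read back with standard docker commands (a sketch):

	docker port functional-540436
	docker inspect -f '{{json .NetworkSettings.Ports}}' functional-540436
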
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-540436 -n functional-540436
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-540436 logs -n 25: (1.998074945s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                  Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-540436 ssh sudo cat                                         | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | /usr/share/ca-certificates/9898252.pem                                 |                   |         |         |                     |                     |
	| image   | functional-540436 image load --daemon                                  | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-540436               |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| ssh     | functional-540436 ssh sudo cat                                         | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | /etc/ssl/certs/3ec20f2e.0                                              |                   |         |         |                     |                     |
	| ssh     | functional-540436 ssh sudo cat                                         | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | /etc/test/nested/copy/989825/hosts                                     |                   |         |         |                     |                     |
	| image   | functional-540436 image ls                                             | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	| image   | functional-540436 image load --daemon                                  | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-540436               |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image   | functional-540436 image ls                                             | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	| image   | functional-540436 image load --daemon                                  | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-540436               |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| service | functional-540436 service list                                         | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	| service | functional-540436 service list                                         | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | -o json                                                                |                   |         |         |                     |                     |
	| service | functional-540436 service                                              | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | --namespace=default --https                                            |                   |         |         |                     |                     |
	|         | --url hello-node                                                       |                   |         |         |                     |                     |
	| service | functional-540436                                                      | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | service hello-node --url                                               |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                       |                   |         |         |                     |                     |
	| service | functional-540436 service                                              | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | hello-node --url                                                       |                   |         |         |                     |                     |
	| ssh     | functional-540436 ssh echo                                             | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | hello                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-540436 ssh cat                                              | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | /etc/hostname                                                          |                   |         |         |                     |                     |
	| tunnel  | functional-540436 tunnel                                               | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| tunnel  | functional-540436 tunnel                                               | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image   | functional-540436 image ls                                             | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	| tunnel  | functional-540436 tunnel                                               | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image   | functional-540436 image save                                           | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-540436               |                   |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image   | functional-540436 image rm                                             | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-540436               |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image   | functional-540436 image ls                                             | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	| image   | functional-540436 image load                                           | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image   | functional-540436 image ls                                             | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	| image   | functional-540436 image save --daemon                                  | functional-540436 | jenkins | v1.31.2 | 30 Aug 23 21:48 UTC | 30 Aug 23 21:48 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-540436               |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	|---------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 21:47:43
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 21:47:43.100284 1009842 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:47:43.100449 1009842 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:47:43.100453 1009842 out.go:309] Setting ErrFile to fd 2...
	I0830 21:47:43.100457 1009842 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:47:43.100717 1009842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
	I0830 21:47:43.101060 1009842 out.go:303] Setting JSON to false
	I0830 21:47:43.102347 1009842 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23398,"bootTime":1693408666,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0830 21:47:43.102419 1009842 start.go:138] virtualization:  
	I0830 21:47:43.104796 1009842 out.go:177] * [functional-540436] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0830 21:47:43.106682 1009842 out.go:177]   - MINIKUBE_LOCATION=17145
	I0830 21:47:43.108276 1009842 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:47:43.106936 1009842 notify.go:220] Checking for updates...
	I0830 21:47:43.111330 1009842 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 21:47:43.113439 1009842 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	I0830 21:47:43.115019 1009842 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0830 21:47:43.116748 1009842 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 21:47:43.118862 1009842 config.go:182] Loaded profile config "functional-540436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:47:43.118957 1009842 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 21:47:43.143886 1009842 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 21:47:43.143981 1009842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 21:47:43.243860 1009842 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-08-30 21:47:43.233054891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 21:47:43.243965 1009842 docker.go:294] overlay module found
	I0830 21:47:43.246055 1009842 out.go:177] * Using the docker driver based on existing profile
	I0830 21:47:43.248255 1009842 start.go:298] selected driver: docker
	I0830 21:47:43.248266 1009842 start.go:902] validating driver "docker" against &{Name:functional-540436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-540436 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:47:43.248374 1009842 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 21:47:43.248472 1009842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 21:47:43.331346 1009842 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-08-30 21:47:43.319824684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 21:47:43.331778 1009842 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 21:47:43.331793 1009842 cni.go:84] Creating CNI manager for ""
	I0830 21:47:43.331797 1009842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0830 21:47:43.331804 1009842 start_flags.go:319] config:
	{Name:functional-540436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-540436 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:47:43.335704 1009842 out.go:177] * Starting control plane node functional-540436 in cluster functional-540436
	I0830 21:47:43.337282 1009842 cache.go:122] Beginning downloading kic base image for docker with crio
	I0830 21:47:43.339349 1009842 out.go:177] * Pulling base image ...
	I0830 21:47:43.342037 1009842 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:47:43.342086 1009842 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0830 21:47:43.342101 1009842 cache.go:57] Caching tarball of preloaded images
	I0830 21:47:43.342116 1009842 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0830 21:47:43.342175 1009842 preload.go:174] Found /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0830 21:47:43.342183 1009842 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 21:47:43.342302 1009842 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/config.json ...
	I0830 21:47:43.360219 1009842 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon, skipping pull
	I0830 21:47:43.360233 1009842 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad exists in daemon, skipping load
	I0830 21:47:43.360257 1009842 cache.go:195] Successfully downloaded all kic artifacts
	I0830 21:47:43.360313 1009842 start.go:365] acquiring machines lock for functional-540436: {Name:mke0200554361e8fe93393d354a3d21b65ed51e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 21:47:43.360392 1009842 start.go:369] acquired machines lock for "functional-540436" in 55.573µs
	I0830 21:47:43.360412 1009842 start.go:96] Skipping create...Using existing machine configuration
	I0830 21:47:43.360416 1009842 fix.go:54] fixHost starting: 
	I0830 21:47:43.360698 1009842 cli_runner.go:164] Run: docker container inspect functional-540436 --format={{.State.Status}}
	I0830 21:47:43.380340 1009842 fix.go:102] recreateIfNeeded on functional-540436: state=Running err=<nil>
	W0830 21:47:43.380367 1009842 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 21:47:43.382553 1009842 out.go:177] * Updating the running docker "functional-540436" container ...
	I0830 21:47:43.384394 1009842 machine.go:88] provisioning docker machine ...
	I0830 21:47:43.384414 1009842 ubuntu.go:169] provisioning hostname "functional-540436"
	I0830 21:47:43.384486 1009842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-540436
	I0830 21:47:43.408837 1009842 main.go:141] libmachine: Using SSH client type: native
	I0830 21:47:43.409329 1009842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34023 <nil> <nil>}
	I0830 21:47:43.409340 1009842 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-540436 && echo "functional-540436" | sudo tee /etc/hostname
	I0830 21:47:43.566673 1009842 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-540436
	
	I0830 21:47:43.566741 1009842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-540436
	I0830 21:47:43.587766 1009842 main.go:141] libmachine: Using SSH client type: native
	I0830 21:47:43.588314 1009842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34023 <nil> <nil>}
	I0830 21:47:43.588330 1009842 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-540436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-540436/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-540436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 21:47:43.738409 1009842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 21:47:43.738425 1009842 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17145-984449/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-984449/.minikube}
	I0830 21:47:43.738452 1009842 ubuntu.go:177] setting up certificates
	I0830 21:47:43.738462 1009842 provision.go:83] configureAuth start
	I0830 21:47:43.738530 1009842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-540436
	I0830 21:47:43.758373 1009842 provision.go:138] copyHostCerts
	I0830 21:47:43.758428 1009842 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem, removing ...
	I0830 21:47:43.758435 1009842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem
	I0830 21:47:43.758512 1009842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem (1082 bytes)
	I0830 21:47:43.758634 1009842 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem, removing ...
	I0830 21:47:43.758638 1009842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem
	I0830 21:47:43.758668 1009842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem (1123 bytes)
	I0830 21:47:43.758788 1009842 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem, removing ...
	I0830 21:47:43.758792 1009842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem
	I0830 21:47:43.758816 1009842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem (1679 bytes)
	I0830 21:47:43.758867 1009842 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem org=jenkins.functional-540436 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-540436]
	I0830 21:47:44.533802 1009842 provision.go:172] copyRemoteCerts
	I0830 21:47:44.533877 1009842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 21:47:44.533919 1009842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-540436
	I0830 21:47:44.555800 1009842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/functional-540436/id_rsa Username:docker}
	I0830 21:47:44.656375 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0830 21:47:44.686199 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 21:47:44.726602 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0830 21:47:44.758500 1009842 provision.go:86] duration metric: configureAuth took 1.020024089s
	I0830 21:47:44.758516 1009842 ubuntu.go:193] setting minikube options for container-runtime
	I0830 21:47:44.758713 1009842 config.go:182] Loaded profile config "functional-540436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:47:44.758822 1009842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-540436
	I0830 21:47:44.776882 1009842 main.go:141] libmachine: Using SSH client type: native
	I0830 21:47:44.777327 1009842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34023 <nil> <nil>}
	I0830 21:47:44.777340 1009842 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 21:47:50.268811 1009842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 21:47:50.268836 1009842 machine.go:91] provisioned docker machine in 6.884426612s
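The /etc/sysconfig/crio.minikube file written a few lines up only has an effect because the crio unit inside the kicbase image sources it and hands CRIO_MINIKUBE_OPTIONS to the daemon; a minimal sketch of that wiring, assuming an EnvironmentFile-style unit (the unit path and ExecStart line below are illustrative, not the verbatim kicbase unit):

	# sketch: how crio.service could consume /etc/sysconfig/crio.minikube
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS

This is also why the provisioner restarts crio immediately after writing the file.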
	I0830 21:47:50.268849 1009842 start.go:300] post-start starting for "functional-540436" (driver="docker")
	I0830 21:47:50.268868 1009842 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 21:47:50.268946 1009842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 21:47:50.268983 1009842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-540436
	I0830 21:47:50.288338 1009842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/functional-540436/id_rsa Username:docker}
	I0830 21:47:50.393436 1009842 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 21:47:50.397966 1009842 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0830 21:47:50.397992 1009842 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0830 21:47:50.398001 1009842 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0830 21:47:50.398007 1009842 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0830 21:47:50.398016 1009842 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/addons for local assets ...
	I0830 21:47:50.398082 1009842 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/files for local assets ...
	I0830 21:47:50.398181 1009842 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem -> 9898252.pem in /etc/ssl/certs
	I0830 21:47:50.398277 1009842 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/test/nested/copy/989825/hosts -> hosts in /etc/test/nested/copy/989825
	I0830 21:47:50.398332 1009842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/989825
	I0830 21:47:50.408898 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem --> /etc/ssl/certs/9898252.pem (1708 bytes)
	I0830 21:47:50.442650 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/test/nested/copy/989825/hosts --> /etc/test/nested/copy/989825/hosts (40 bytes)
	I0830 21:47:50.471022 1009842 start.go:303] post-start completed in 202.152638ms
	I0830 21:47:50.471094 1009842 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 21:47:50.471143 1009842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-540436
	I0830 21:47:50.489501 1009842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/functional-540436/id_rsa Username:docker}
	I0830 21:47:50.587784 1009842 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0830 21:47:50.594137 1009842 fix.go:56] fixHost completed within 7.23371279s
	I0830 21:47:50.594151 1009842 start.go:83] releasing machines lock for "functional-540436", held for 7.233752815s
	I0830 21:47:50.594230 1009842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-540436
	I0830 21:47:50.613697 1009842 ssh_runner.go:195] Run: cat /version.json
	I0830 21:47:50.613743 1009842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-540436
	I0830 21:47:50.613757 1009842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 21:47:50.613821 1009842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-540436
	I0830 21:47:50.640837 1009842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/functional-540436/id_rsa Username:docker}
	I0830 21:47:50.640991 1009842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/functional-540436/id_rsa Username:docker}
	I0830 21:47:50.868789 1009842 ssh_runner.go:195] Run: systemctl --version
	I0830 21:47:50.878046 1009842 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 21:47:51.055799 1009842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 21:47:51.061626 1009842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 21:47:51.073054 1009842 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0830 21:47:51.073149 1009842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 21:47:51.084684 1009842 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0830 21:47:51.084697 1009842 start.go:466] detecting cgroup driver to use...
	I0830 21:47:51.084730 1009842 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0830 21:47:51.084779 1009842 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 21:47:51.100564 1009842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 21:47:51.115265 1009842 docker.go:196] disabling cri-docker service (if available) ...
	I0830 21:47:51.115322 1009842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 21:47:51.131881 1009842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 21:47:51.146392 1009842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 21:47:51.284014 1009842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 21:47:51.412639 1009842 docker.go:212] disabling docker service ...
	I0830 21:47:51.412693 1009842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 21:47:51.428912 1009842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 21:47:51.443715 1009842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 21:47:51.566666 1009842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 21:47:51.689031 1009842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
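Note the stop/disable/mask sequence used for cri-docker and docker above: masking goes one step beyond disabling by pointing the unit file at /dev/null, so neither a dependency nor a manual start can bring docker back while CRI-O owns the node. Roughly, the mask step amounts to:

	# approximate effect of: sudo systemctl mask docker.service
	ln -sf /dev/null /etc/systemd/system/docker.service
	systemctl daemon-reload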
	I0830 21:47:51.702394 1009842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 21:47:51.721665 1009842 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 21:47:51.721723 1009842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:47:51.733507 1009842 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 21:47:51.733563 1009842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:47:51.745344 1009842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:47:51.757399 1009842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
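Taken together, the three sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying these settings (a sketch: exact placement depends on the stock kicbase file, but in CRI-O's config pause_image belongs under [crio.image] and the cgroup keys under [crio.runtime]):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"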
	I0830 21:47:51.769123 1009842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 21:47:51.780187 1009842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 21:47:51.790586 1009842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 21:47:51.800951 1009842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 21:47:51.922953 1009842 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0830 21:47:52.088814 1009842 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 21:47:52.088871 1009842 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 21:47:52.093712 1009842 start.go:534] Will wait 60s for crictl version
	I0830 21:47:52.093775 1009842 ssh_runner.go:195] Run: which crictl
	I0830 21:47:52.098294 1009842 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 21:47:52.138912 1009842 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0830 21:47:52.138987 1009842 ssh_runner.go:195] Run: crio --version
	I0830 21:47:52.183194 1009842 ssh_runner.go:195] Run: crio --version
	I0830 21:47:52.234847 1009842 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0830 21:47:52.236985 1009842 cli_runner.go:164] Run: docker network inspect functional-540436 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0830 21:47:52.254490 1009842 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
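The grep above passes only if the node's /etc/hosts already pins the host gateway alias, i.e. contains a line of exactly this form (192.168.49.1 being the docker network gateway for this cluster, alongside the node IP 192.168.49.2):

	192.168.49.1	host.minikube.internal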
	I0830 21:47:52.261542 1009842 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0830 21:47:52.263169 1009842 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:47:52.263249 1009842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:47:52.315601 1009842 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 21:47:52.315611 1009842 crio.go:415] Images already preloaded, skipping extraction
	I0830 21:47:52.315666 1009842 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:47:52.364022 1009842 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 21:47:52.364033 1009842 cache_images.go:84] Images are preloaded, skipping loading
	I0830 21:47:52.364115 1009842 ssh_runner.go:195] Run: crio config
	I0830 21:47:52.421672 1009842 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0830 21:47:52.421703 1009842 cni.go:84] Creating CNI manager for ""
	I0830 21:47:52.421711 1009842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0830 21:47:52.421720 1009842 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 21:47:52.421739 1009842 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-540436 NodeName:functional-540436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 21:47:52.421871 1009842 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-540436"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0830 21:47:52.421944 1009842 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=functional-540436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:functional-540436 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0830 21:47:52.422008 1009842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 21:47:52.435037 1009842 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 21:47:52.435106 1009842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 21:47:52.446255 1009842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (427 bytes)
	I0830 21:47:52.468579 1009842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 21:47:52.490405 1009842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1948 bytes)
	I0830 21:47:52.512100 1009842 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0830 21:47:52.516860 1009842 certs.go:56] Setting up /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436 for IP: 192.168.49.2
	I0830 21:47:52.516880 1009842 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1c893f087ee62e9f919bfa6a6de84891ee8b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:47:52.517020 1009842 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.key
	I0830 21:47:52.517059 1009842 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.key
	I0830 21:47:52.517147 1009842 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.key
	I0830 21:47:52.517196 1009842 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/apiserver.key.dd3b5fb2
	I0830 21:47:52.517235 1009842 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/proxy-client.key
	I0830 21:47:52.517345 1009842 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/989825.pem (1338 bytes)
	W0830 21:47:52.517371 1009842 certs.go:433] ignoring /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/989825_empty.pem, impossibly tiny 0 bytes
	I0830 21:47:52.517379 1009842 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem (1675 bytes)
	I0830 21:47:52.517402 1009842 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem (1082 bytes)
	I0830 21:47:52.517425 1009842 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem (1123 bytes)
	I0830 21:47:52.517446 1009842 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem (1679 bytes)
	I0830 21:47:52.517488 1009842 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem (1708 bytes)
	I0830 21:47:52.518082 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 21:47:52.547332 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 21:47:52.575970 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 21:47:52.604505 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 21:47:52.632821 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 21:47:52.662966 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 21:47:52.691771 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 21:47:52.720772 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0830 21:47:52.750592 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem --> /usr/share/ca-certificates/9898252.pem (1708 bytes)
	I0830 21:47:52.780442 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 21:47:52.809095 1009842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/certs/989825.pem --> /usr/share/ca-certificates/989825.pem (1338 bytes)
	I0830 21:47:52.837636 1009842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 21:47:52.858638 1009842 ssh_runner.go:195] Run: openssl version
	I0830 21:47:52.865580 1009842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9898252.pem && ln -fs /usr/share/ca-certificates/9898252.pem /etc/ssl/certs/9898252.pem"
	I0830 21:47:52.877033 1009842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9898252.pem
	I0830 21:47:52.881761 1009842 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:45 /usr/share/ca-certificates/9898252.pem
	I0830 21:47:52.881829 1009842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9898252.pem
	I0830 21:47:52.890167 1009842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9898252.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 21:47:52.900787 1009842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 21:47:52.912334 1009842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:47:52.916949 1009842 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:47:52.917023 1009842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:47:52.925703 1009842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 21:47:52.936826 1009842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989825.pem && ln -fs /usr/share/ca-certificates/989825.pem /etc/ssl/certs/989825.pem"
	I0830 21:47:52.948670 1009842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989825.pem
	I0830 21:47:52.953198 1009842 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:45 /usr/share/ca-certificates/989825.pem
	I0830 21:47:52.953257 1009842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989825.pem
	I0830 21:47:52.961766 1009842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/989825.pem /etc/ssl/certs/51391683.0"
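The opaque names 3ec20f2e.0, b5213941.0 and 51391683.0 above follow OpenSSL's subject-hash convention: each "openssl x509 -hash" call prints an 8-hex-digit subject hash, and TLS verification looks CAs up in /etc/ssl/certs by "<hash>.0". For example, the hash implied by the symlink created for minikubeCA.pem:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941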
	I0830 21:47:52.972435 1009842 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 21:47:52.976837 1009842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0830 21:47:52.985297 1009842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0830 21:47:52.993702 1009842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0830 21:47:53.001980 1009842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0830 21:47:53.010982 1009842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0830 21:47:53.020367 1009842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
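In each of the openssl calls above, "-checkend 86400" makes the command exit non-zero if the certificate expires within 86400 seconds (24 hours), so a clean pass here means every control-plane certificate is valid for at least another day. Standalone form, with cert.pem as a placeholder path:

	# cert.pem is a placeholder; exit 0 = still valid in 24h, non-zero = renew soon
	openssl x509 -noout -in cert.pem -checkend 86400 && echo OK || echo renew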
	I0830 21:47:53.029069 1009842 kubeadm.go:404] StartCluster: {Name:functional-540436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-540436 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:47:53.029177 1009842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 21:47:53.029231 1009842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 21:47:53.073333 1009842 cri.go:89] found id: "9f3a49a281115091089993192d603289c23ceb391466df05198fde723378faab"
	I0830 21:47:53.073345 1009842 cri.go:89] found id: "37db3648f5fec722d6d1f38820f48e575df63e352c75b2a97b12a9b99b2c78d0"
	I0830 21:47:53.073349 1009842 cri.go:89] found id: "0fb26268fc2be4c8676056c269b09db35ccdf56a15e263b594fd468075727f41"
	I0830 21:47:53.073353 1009842 cri.go:89] found id: "80169f5469464e5ed2fb1d5f578e7ba8acd0eb75d45ccd91ec9a3d6a934a9411"
	I0830 21:47:53.073356 1009842 cri.go:89] found id: "76532378b7346528b0a889f9678644197754853db8e9ffd95176f43b46cdfbe8"
	I0830 21:47:53.073360 1009842 cri.go:89] found id: "858ea4b3ef1b8c580bf79ee90e29a3aad5dc0da7e4e6169eeaa750b179e42c99"
	I0830 21:47:53.073363 1009842 cri.go:89] found id: "0724e2d048ef9b47601e966913d67721ec06ff7c6d1743c48d187bf98d9a7b7c"
	I0830 21:47:53.073366 1009842 cri.go:89] found id: "e36a129cb5f42c05a11f77487d426631bc269934d12ff17fb21381cee712ad01"
	I0830 21:47:53.073370 1009842 cri.go:89] found id: ""
	I0830 21:47:53.073423 1009842 ssh_runner.go:195] Run: sudo runc list -f json
	I0830 21:47:53.099468 1009842 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0724e2d048ef9b47601e966913d67721ec06ff7c6d1743c48d187bf98d9a7b7c","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0724e2d048ef9b47601e966913d67721ec06ff7c6d1743c48d187bf98d9a7b7c/userdata","rootfs":"/var/lib/containers/storage/overlay/006c018cdb641d2872d1f03b4758c339b131f23537816d879f3c86cd502563d9/merged","created":"2023-08-30T21:47:13.010504364Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"eb035af2","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"eb035af2\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0724e2d048ef9b47601e966913d67721ec06ff7c6d1743c48d187bf98d9a7b7c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-30T21:47:12.721359498Z","io.kubernetes.cri-o.Image":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.1","io.kubernetes.cri-o.ImageRef":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-540436\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b0a0b4248b41e7e8cd104c788b99e981\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-540436_b0a0b4248b41e7e8cd104c788b99e981/kube-apiserver/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/006c018cdb641d2872d1f03b4758c339b131f23537816d879f3c86cd502563d9/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-540436_kube-system_b0a0b4248b41e7e8cd104c788b99e981_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/66521cdcdf4b6ac5b42c1f0db695faaca638c2c1eb2d86b4d080b9c0a234de81/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"66521cdcdf4b6ac5b42c1f0db695faaca638c2c1eb2d86b4d080b9c0a234de81","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-540436_kube-system_b0a0b4248b41e7e8cd104c788b99e981_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b0a0b4248b41e7e8cd104c788b99e981/containers/kube-apiserver/be47d5ff\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b0a0b4248b41e7e8cd104c788b99e981/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-functional-540436","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b0a0b4248b41e7e8cd104c788b99e981","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"b0a0b4248b41e7e8cd104c788b99e981","kubernetes.io/config.seen":"2023-08-30T21:45:56.970345932Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0fb26268fc2be4c8676056c269b09db35ccdf56a15e263b594fd468075727f41","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0fb26268fc2be4c8676056c269b09db35ccdf56a15e263b594fd468075727f41/userdata","rootfs":"/var/lib/containers/storage/overlay/b123b488ee9be1fecc5e9b411b4cf3ad0de97906a8b9be0fa284cdd52d3a1828/merged","created":"2023-08-30T21:47:13.005833026Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b7243b12","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b7243b12\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0fb26268fc2be4c8676056c269b09db35ccdf56a15e263b594fd468075727f41","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-30T21:47:12.807822549Z","io.kubernetes.cri-o.Image":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.1","io.kubernetes.cri-o.ImageRef":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-540436\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"14ca21ff17e47db22df697501433bc1f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-540436_14ca21ff17e47db22df697501433bc1f/kube-controller-manager/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b123b488ee9be1fecc5e9b411b4cf3ad0de97906a8b9be0fa284cdd52d3a1828/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-540436_kube-system_14ca21ff17e47db22df697501433bc1f_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/948dd49e354a9472ff73c54d5ad5cf015e8922822da44909f702cb54a319e8f4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"948dd49e354a9472ff73c54d5ad5cf015e8922822da44909f702cb54a319e8f4","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-540436_kube-system_14ca21ff17e47db22df697501433bc1f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/14ca21ff17e47db22df697501433bc1f/containers/kube-controller-manager/d8825d3b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/14ca21ff17e47db22df697501433bc1f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-540436","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"14ca21ff17e47db22df697501433bc1f","kubernetes.io/config.hash":"14ca21ff17e47db22df697501433bc1f","kubernetes.io/config.seen":"2023-08-30T21:45:56.970338236Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"37db3648f5fec722d6d1f38820f48e575df63e352c75b2a97b12a9b99b2c78d0","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/37db3648f5fec722d6d1f38820f48e575df63e352c75b2a97b12a9b99b2c78d0/userdata","rootfs":"/var/lib/containers/storage/overlay/a9108d6272a42e04e680ff2481bb18b2ca4e29d3d934e764dc208b8a3457f6dd/merged","created":"2023-08-30T21:47:13.174167792Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"4bcfb035","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"4bcfb035\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"37db3648f5fec722d6d1f38820f48e575df63e352c75b2a97b12a9b99b2c78d0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-30T21:47:12.81692447Z","io.kubernetes.cri-o.Image":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.1","io.kubernetes.cri-o.ImageRef":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-zqwx8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"af8958a4-c256-4ab8-bf6f-65ca6f33eb1d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-zqwx8_af8958a4-c256-4ab8-bf6f-65ca6f33eb1d/kube-proxy/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a9108d6272a42e04e680ff2481bb18b2ca4e29d3d934e764dc208b8a3457f6dd/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-zqwx8_kube-system_af8958a4-c256-4ab8-bf6f-65ca6f33eb1d_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4b656cb010ffb4c63d0c30c55f647627092eb0f7726adb39794d6f0fb53e65b1/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4b656cb010ffb4c63d0c30c55f647627092eb0f7726adb39794d6f0fb53e65b1","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-zqwx8_kube-system_af8958a4-c256-4ab8-bf6f-65ca6f33eb1d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/af8958a4-c256-4ab8-bf6f-65ca6f33eb1d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/af8958a4-c256-4ab8-bf6f-65ca6f33eb1d/containers/kube-proxy/6009261f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/af8958a4-c256-4ab8-bf6f-65ca6f33eb1d/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/af8958a4-c256-4ab8-bf6f-65ca6f33eb1d/volumes/kubernetes.io~projected/kube-api-acc
ess-gfwzg\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-zqwx8","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"af8958a4-c256-4ab8-bf6f-65ca6f33eb1d","kubernetes.io/config.seen":"2023-08-30T21:46:16.969970680Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76532378b7346528b0a889f9678644197754853db8e9ffd95176f43b46cdfbe8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/76532378b7346528b0a889f9678644197754853db8e9ffd95176f43b46cdfbe8/userdata","rootfs":"/var/lib/containers/storage/overlay/baed8dc4516ad77e56dc0644bbae97308b4399e7dc12d6789454654f497cb81d/merged","created":"2023-08-30T21:47:12.950951259Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"61920a46","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMess
agePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"61920a46\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"76532378b7346528b0a889f9678644197754853db8e9ffd95176f43b46cdfbe8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-30T21:47:12.749880856Z","io.kubernetes.cri-o.Image":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.1","io.kubernetes.cri-o.ImageRef":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-function
al-540436\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"6ad528117c1b36e8cfe57a31011301fa\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-540436_6ad528117c1b36e8cfe57a31011301fa/kube-scheduler/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/baed8dc4516ad77e56dc0644bbae97308b4399e7dc12d6789454654f497cb81d/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-540436_kube-system_6ad528117c1b36e8cfe57a31011301fa_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/9b9c8a326172ac29f3095287a62332b7062dce954ae1c0542dcc0a1aa7a723eb/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9b9c8a326172ac29f3095287a62332b7062dce954ae1c0542dcc0a1aa7a723eb","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-540436_kube-system_6ad528117c1b36e8cfe57a31011301fa_0","io.kubernetes.cri-o.Se
ccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/6ad528117c1b36e8cfe57a31011301fa/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6ad528117c1b36e8cfe57a31011301fa/containers/kube-scheduler/c88ecbc8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-540436","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"6ad528117c1b36e8cfe57a31011301fa","kubernetes.io/config.hash":"6ad528117c1b36e8cfe57a31011301fa","kubernetes.io/conf
ig.seen":"2023-08-30T21:45:56.970343430Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"80169f5469464e5ed2fb1d5f578e7ba8acd0eb75d45ccd91ec9a3d6a934a9411","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/80169f5469464e5ed2fb1d5f578e7ba8acd0eb75d45ccd91ec9a3d6a934a9411/userdata","rootfs":"/var/lib/containers/storage/overlay/508ae8800d4aa88bab79e9e5041093df04aae9c4643390bc4230a3930cdb529b/merged","created":"2023-08-30T21:47:13.032498132Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"573bc5db","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"573bc5db\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-
log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"80169f5469464e5ed2fb1d5f578e7ba8acd0eb75d45ccd91ec9a3d6a934a9411","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-30T21:47:12.775905889Z","io.kubernetes.cri-o.Image":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1a771d92-1ee8-4f9c-a683-dc6c42158c24\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_1a771d92-1ee8-4f9c-a683-dc6c42158c24/storage-provisioner/2.log","io.kubernetes.cri-o.Me
tadata":"{\"name\":\"storage-provisioner\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/508ae8800d4aa88bab79e9e5041093df04aae9c4643390bc4230a3930cdb529b/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_1a771d92-1ee8-4f9c-a683-dc6c42158c24_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/6260367c5e51b1ba52ab3315dd2a438a7f7b349abda7fcadf0c4bf8456ff7938/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6260367c5e51b1ba52ab3315dd2a438a7f7b349abda7fcadf0c4bf8456ff7938","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_1a771d92-1ee8-4f9c-a683-dc6c42158c24_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"
/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1a771d92-1ee8-4f9c-a683-dc6c42158c24/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1a771d92-1ee8-4f9c-a683-dc6c42158c24/containers/storage-provisioner/641ff94d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/1a771d92-1ee8-4f9c-a683-dc6c42158c24/volumes/kubernetes.io~projected/kube-api-access-jtvmg\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1a771d92-1ee8-4f9c-a683-dc6c42158c24","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/
mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2023-08-30T21:46:48.116998727Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"858ea4b3ef1b8c580bf79ee90e29a3aad5dc0da7e4e6169eeaa750b179e42c99","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/858ea4b3ef1b8c580bf79ee90e29a3aad5dc0da7e4e6169eeaa750b179e42c99/userdata","rootfs":"/var/lib/containers/storage/overlay/bb9548831acd6aed61b4aeb215cdfe7c8d70dcdde43862f00f7018c348a69f
73/merged","created":"2023-08-30T21:47:12.974969895Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9027a541","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9027a541\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"858ea4b3ef1b8c580bf79ee90e29a3aad5dc0da7e4e6169eeaa750b179e42c99","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-30T21:47:12.742656282Z","io.kubernetes.cri-o.Image":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","io.kubernetes.cri-o.ImageName":"doc
ker.io/kindest/kindnetd:v20230511-dc714da8","io.kubernetes.cri-o.ImageRef":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-wgctp\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c8716d65-64d3-4833-a4c1-40109e33d25e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-wgctp_c8716d65-64d3-4833-a4c1-40109e33d25e/kindnet-cni/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/bb9548831acd6aed61b4aeb215cdfe7c8d70dcdde43862f00f7018c348a69f73/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-wgctp_kube-system_c8716d65-64d3-4833-a4c1-40109e33d25e_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/be49a41b32e90f7ff4f78d27469486879e7295578177d0272d5de36aaa64920b/userdata/resolv.conf","io.kubernetes.cri-o.S
andboxID":"be49a41b32e90f7ff4f78d27469486879e7295578177d0272d5de36aaa64920b","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-wgctp_kube-system_c8716d65-64d3-4833-a4c1-40109e33d25e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c8716d65-64d3-4833-a4c1-40109e33d25e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c8716d65-64d3-4833-a4c1-40109e33d25e/containers/kindnet-cni/8a799807\",\"readonly\":false,\"propagation\":0,\"seli
nux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/c8716d65-64d3-4833-a4c1-40109e33d25e/volumes/kubernetes.io~projected/kube-api-access-cjdsd\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-wgctp","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c8716d65-64d3-4833-a4c1-40109e33d25e","kubernetes.io/config.seen":"2023-08-30T21:46:16.988127538Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f3a49a281115091089993192d603289c23ceb391466df05198fde723378faab","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9f3a49a281115091089993192d603289c23ceb391466df05198fde723378faab/userdata","rootfs":"/var/lib/containers/storage/
overlay/9e911bd9a489a956df018942a0ab7501e789ba43d9fba67eb2731200c9227c83/merged","created":"2023-08-30T21:47:32.370758326Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c9acef06","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c9acef06\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\
"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9f3a49a281115091089993192d603289c23ceb391466df05198fde723378faab","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-30T21:47:32.312374262Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-7rg2p\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"29f381de-64bf-4485-adb4-935e61de0003\"}","io.kuberne
tes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-7rg2p_29f381de-64bf-4485-adb4-935e61de0003/coredns/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9e911bd9a489a956df018942a0ab7501e789ba43d9fba67eb2731200c9227c83/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-7rg2p_kube-system_29f381de-64bf-4485-adb4-935e61de0003_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/149e3db87f360686081a7fe14161211d3df36ceec245613daf85c644ad6db437/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"149e3db87f360686081a7fe14161211d3df36ceec245613daf85c644ad6db437","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-7rg2p_kube-system_29f381de-64bf-4485-adb4-935e61de0003_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"con
tainer_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/29f381de-64bf-4485-adb4-935e61de0003/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/29f381de-64bf-4485-adb4-935e61de0003/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/29f381de-64bf-4485-adb4-935e61de0003/containers/coredns/3aafeb44\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/29f381de-64bf-4485-adb4-935e61de0003/volumes/kubernetes.io~projected/kube-api-access-wkp8t\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-7rg2p","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"3
0","io.kubernetes.pod.uid":"29f381de-64bf-4485-adb4-935e61de0003","kubernetes.io/config.seen":"2023-08-30T21:46:48.108749523Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e36a129cb5f42c05a11f77487d426631bc269934d12ff17fb21381cee712ad01","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e36a129cb5f42c05a11f77487d426631bc269934d12ff17fb21381cee712ad01/userdata","rootfs":"/var/lib/containers/storage/overlay/f9b5a8779aa5819a3fb634e73d45b3437c79fdf3fa6234ca31b2c3df3ee928a8/merged","created":"2023-08-30T21:47:12.813006518Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d9a1c4d1","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d9a1c4d1\",\"io.kubernetes.container.restartCount\":\"2\",\
"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e36a129cb5f42c05a11f77487d426631bc269934d12ff17fb21381cee712ad01","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-08-30T21:47:12.664012798Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-540436\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cc094e3450a4377ae87b2051eed602ce\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-540436_cc094e3450a4377ae87b2051eed602ce/etcd/2.log",
"io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f9b5a8779aa5819a3fb634e73d45b3437c79fdf3fa6234ca31b2c3df3ee928a8/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-functional-540436_kube-system_cc094e3450a4377ae87b2051eed602ce_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/26f6cb73107d643d787d1f6470f17f22a4b7d9f35d0792b91256ac572334387a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"26f6cb73107d643d787d1f6470f17f22a4b7d9f35d0792b91256ac572334387a","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-540436_kube-system_cc094e3450a4377ae87b2051eed602ce_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cc094e3450a4377ae87b2051eed602ce/etc-hosts\",\"readonly\":false,\"propagat
ion\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cc094e3450a4377ae87b2051eed602ce/containers/etcd/2717d559\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-540436","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cc094e3450a4377ae87b2051eed602ce","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"cc094e3450a4377ae87b2051eed602ce","kubernetes.io/config.seen":"2023-08-30T21:45:56.970344825Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I0830 21:47:53.100013 1009842 cri.go:126] list returned 8 containers
	I0830 21:47:53.100020 1009842 cri.go:129] container: {ID:0724e2d048ef9b47601e966913d67721ec06ff7c6d1743c48d187bf98d9a7b7c Status:stopped}
	I0830 21:47:53.100033 1009842 cri.go:135] skipping {0724e2d048ef9b47601e966913d67721ec06ff7c6d1743c48d187bf98d9a7b7c stopped}: state = "stopped", want "paused"
	I0830 21:47:53.100040 1009842 cri.go:129] container: {ID:0fb26268fc2be4c8676056c269b09db35ccdf56a15e263b594fd468075727f41 Status:stopped}
	I0830 21:47:53.100048 1009842 cri.go:135] skipping {0fb26268fc2be4c8676056c269b09db35ccdf56a15e263b594fd468075727f41 stopped}: state = "stopped", want "paused"
	I0830 21:47:53.100053 1009842 cri.go:129] container: {ID:37db3648f5fec722d6d1f38820f48e575df63e352c75b2a97b12a9b99b2c78d0 Status:stopped}
	I0830 21:47:53.100059 1009842 cri.go:135] skipping {37db3648f5fec722d6d1f38820f48e575df63e352c75b2a97b12a9b99b2c78d0 stopped}: state = "stopped", want "paused"
	I0830 21:47:53.100064 1009842 cri.go:129] container: {ID:76532378b7346528b0a889f9678644197754853db8e9ffd95176f43b46cdfbe8 Status:stopped}
	I0830 21:47:53.100070 1009842 cri.go:135] skipping {76532378b7346528b0a889f9678644197754853db8e9ffd95176f43b46cdfbe8 stopped}: state = "stopped", want "paused"
	I0830 21:47:53.100075 1009842 cri.go:129] container: {ID:80169f5469464e5ed2fb1d5f578e7ba8acd0eb75d45ccd91ec9a3d6a934a9411 Status:stopped}
	I0830 21:47:53.100080 1009842 cri.go:135] skipping {80169f5469464e5ed2fb1d5f578e7ba8acd0eb75d45ccd91ec9a3d6a934a9411 stopped}: state = "stopped", want "paused"
	I0830 21:47:53.100085 1009842 cri.go:129] container: {ID:858ea4b3ef1b8c580bf79ee90e29a3aad5dc0da7e4e6169eeaa750b179e42c99 Status:stopped}
	I0830 21:47:53.100091 1009842 cri.go:135] skipping {858ea4b3ef1b8c580bf79ee90e29a3aad5dc0da7e4e6169eeaa750b179e42c99 stopped}: state = "stopped", want "paused"
	I0830 21:47:53.100095 1009842 cri.go:129] container: {ID:9f3a49a281115091089993192d603289c23ceb391466df05198fde723378faab Status:stopped}
	I0830 21:47:53.100101 1009842 cri.go:135] skipping {9f3a49a281115091089993192d603289c23ceb391466df05198fde723378faab stopped}: state = "stopped", want "paused"
	I0830 21:47:53.100106 1009842 cri.go:129] container: {ID:e36a129cb5f42c05a11f77487d426631bc269934d12ff17fb21381cee712ad01 Status:stopped}
	I0830 21:47:53.100111 1009842 cri.go:135] skipping {e36a129cb5f42c05a11f77487d426631bc269934d12ff17fb21381cee712ad01 stopped}: state = "stopped", want "paused"
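The skip decisions above are a plain status filter: this pass wants paused containers (there is nothing to unpause), and all eight entries are stopped, so every one is skipped. A sketch of that filter, using illustrative types rather than minikube's actual `cri` package:

```go
package main

import "fmt"

// container mirrors the two fields the cri.go log lines print; these are
// illustrative types, not minikube's actual ones.
type container struct {
	ID     string
	Status string
}

// filterByStatus keeps containers whose status matches want and logs a
// skip line for the rest, like cri.go:135 above.
func filterByStatus(cs []container, want string) []container {
	var kept []container
	for _, c := range cs {
		if c.Status != want {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n",
				c.ID, c.Status, c.Status, want)
			continue
		}
		kept = append(kept, c)
	}
	return kept
}

func main() {
	cs := []container{
		{ID: "0724e2d048ef", Status: "stopped"},
		{ID: "e36a129cb5f4", Status: "stopped"},
	}
	fmt.Printf("kept %d of %d\n", len(filterByStatus(cs, "paused")), len(cs))
}
```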
	I0830 21:47:53.100159 1009842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 21:47:53.110916 1009842 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0830 21:47:53.110926 1009842 kubeadm.go:636] restartCluster start
	I0830 21:47:53.110980 1009842 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0830 21:47:53.121220 1009842 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:47:53.121762 1009842 kubeconfig.go:92] found "functional-540436" server: "https://192.168.49.2:8441"
	I0830 21:47:53.123505 1009842 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0830 21:47:53.134290 1009842 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-08-30 21:45:48.835102967 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-08-30 21:47:52.506941937 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
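The reconfigure decision hinges on `diff -u` of the deployed kubeadm.yaml against the freshly generated one: a non-empty diff (diff exit status 1) means the cluster needs reconfiguring — here the admission-plugin list changed. A local sketch of that check (minikube runs the same command over ssh):

```go
package main

import (
	"fmt"
	"os/exec"
)

// configsDiffer shells out to diff(1) the way the log above does:
// exit 0 means identical, exit 1 means the configs differ, anything
// else is a real error.
func configsDiffer(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical, no reconfigure needed
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ
	}
	return false, "", err // diff itself failed
}

func main() {
	differ, d, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if differ {
		fmt.Print("needs reconfigure: configs differ:\n" + d)
	}
}
```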
	I0830 21:47:53.134299 1009842 kubeadm.go:1128] stopping kube-system containers ...
	I0830 21:47:53.134308 1009842 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0830 21:47:53.134360 1009842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 21:47:53.177602 1009842 cri.go:89] found id: "9f3a49a281115091089993192d603289c23ceb391466df05198fde723378faab"
	I0830 21:47:53.177615 1009842 cri.go:89] found id: "37db3648f5fec722d6d1f38820f48e575df63e352c75b2a97b12a9b99b2c78d0"
	I0830 21:47:53.177619 1009842 cri.go:89] found id: "0fb26268fc2be4c8676056c269b09db35ccdf56a15e263b594fd468075727f41"
	I0830 21:47:53.177622 1009842 cri.go:89] found id: "80169f5469464e5ed2fb1d5f578e7ba8acd0eb75d45ccd91ec9a3d6a934a9411"
	I0830 21:47:53.177626 1009842 cri.go:89] found id: "76532378b7346528b0a889f9678644197754853db8e9ffd95176f43b46cdfbe8"
	I0830 21:47:53.177630 1009842 cri.go:89] found id: "858ea4b3ef1b8c580bf79ee90e29a3aad5dc0da7e4e6169eeaa750b179e42c99"
	I0830 21:47:53.177633 1009842 cri.go:89] found id: "0724e2d048ef9b47601e966913d67721ec06ff7c6d1743c48d187bf98d9a7b7c"
	I0830 21:47:53.177647 1009842 cri.go:89] found id: "e36a129cb5f42c05a11f77487d426631bc269934d12ff17fb21381cee712ad01"
	I0830 21:47:53.177650 1009842 cri.go:89] found id: ""
	I0830 21:47:53.177655 1009842 cri.go:234] Stopping containers: [9f3a49a281115091089993192d603289c23ceb391466df05198fde723378faab 37db3648f5fec722d6d1f38820f48e575df63e352c75b2a97b12a9b99b2c78d0 0fb26268fc2be4c8676056c269b09db35ccdf56a15e263b594fd468075727f41 80169f5469464e5ed2fb1d5f578e7ba8acd0eb75d45ccd91ec9a3d6a934a9411 76532378b7346528b0a889f9678644197754853db8e9ffd95176f43b46cdfbe8 858ea4b3ef1b8c580bf79ee90e29a3aad5dc0da7e4e6169eeaa750b179e42c99 0724e2d048ef9b47601e966913d67721ec06ff7c6d1743c48d187bf98d9a7b7c e36a129cb5f42c05a11f77487d426631bc269934d12ff17fb21381cee712ad01]
	I0830 21:47:53.177709 1009842 ssh_runner.go:195] Run: which crictl
	I0830 21:47:53.182350 1009842 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 9f3a49a281115091089993192d603289c23ceb391466df05198fde723378faab 37db3648f5fec722d6d1f38820f48e575df63e352c75b2a97b12a9b99b2c78d0 0fb26268fc2be4c8676056c269b09db35ccdf56a15e263b594fd468075727f41 80169f5469464e5ed2fb1d5f578e7ba8acd0eb75d45ccd91ec9a3d6a934a9411 76532378b7346528b0a889f9678644197754853db8e9ffd95176f43b46cdfbe8 858ea4b3ef1b8c580bf79ee90e29a3aad5dc0da7e4e6169eeaa750b179e42c99 0724e2d048ef9b47601e966913d67721ec06ff7c6d1743c48d187bf98d9a7b7c e36a129cb5f42c05a11f77487d426631bc269934d12ff17fb21381cee712ad01
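Stopping the kube-system containers is the two-command sequence shown above: `crictl ps -a --quiet` with a namespace label filter to collect IDs, then a single `crictl stop --timeout=10` over the whole batch. A sketch of the same pair, run locally as root instead of over ssh:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers mirrors the two commands in the log above:
// list kube-system container IDs with crictl, then stop them all with
// a 10-second grace period. Must run as root (the log uses sudo).
func stopKubeSystemContainers() error {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return fmt.Errorf("listing containers: %w", err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	args := append([]string{"stop", "--timeout=10"}, ids...)
	return exec.Command("crictl", args...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println(err)
	}
}
```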
	I0830 21:47:53.248133 1009842 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0830 21:47:53.348341 1009842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 21:47:53.359142 1009842 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Aug 30 21:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 30 21:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug 30 21:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Aug 30 21:45 /etc/kubernetes/scheduler.conf
	
	I0830 21:47:53.359201 1009842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0830 21:47:53.370038 1009842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0830 21:47:53.380724 1009842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0830 21:47:53.391233 1009842 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:47:53.391291 1009842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0830 21:47:53.401962 1009842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0830 21:47:53.412492 1009842 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0830 21:47:53.412548 1009842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
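The lines above are stale-kubeconfig pruning: each file under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that does not mention it (grep exits 1, as for controller-manager.conf and scheduler.conf here) is removed so the next `kubeadm init phase kubeconfig` regenerates it. A sketch of the same logic with a plain substring check standing in for grep:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfig removes path if it does not reference the expected
// control-plane endpoint, mirroring the grep-then-rm pair in the log.
func pruneStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // already points at the right endpoint
	}
	fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
	return os.Remove(path)
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneStaleKubeconfig(f,
			"https://control-plane.minikube.internal:8441"); err != nil {
			fmt.Println(err)
		}
	}
}
```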
	I0830 21:47:53.422963 1009842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 21:47:53.433781 1009842 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0830 21:47:53.433794 1009842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:47:53.496183 1009842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:47:56.020893 1009842 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.524682905s)
	I0830 21:47:56.020912 1009842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:47:56.230700 1009842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:47:56.316992 1009842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
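The restart then replays `kubeadm init` phase by phase against the copied config — certs, kubeconfig, kubelet-start, control-plane, etcd — exactly the sequence above. A sketch of that loop (minikube wraps each call in `sudo env PATH=/var/lib/minikube/binaries/v1.28.1:$PATH /bin/bash -c` over ssh; this sketch runs kubeadm from $PATH directly):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the kubeadm init phases from the log, in order,
// all against the same generated config file.
func runInitPhases(cfg string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", cfg)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}
```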
	I0830 21:47:56.413577 1009842 api_server.go:52] waiting for apiserver process to appear ...
	I0830 21:47:56.413647 1009842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:47:56.436711 1009842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:47:56.954072 1009842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:47:57.454330 1009842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:47:57.482263 1009842 api_server.go:72] duration metric: took 1.068686023s to wait for apiserver process to appear ...
	I0830 21:47:57.482276 1009842 api_server.go:88] waiting for apiserver healthz status ...
	I0830 21:47:57.482292 1009842 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0830 21:47:57.482587 1009842 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0830 21:47:57.482609 1009842 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0830 21:47:57.482778 1009842 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0830 21:47:57.983426 1009842 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0830 21:48:01.114934 1009842 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 21:48:01.114949 1009842 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 21:48:01.114959 1009842 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0830 21:48:01.241147 1009842 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0830 21:48:01.241167 1009842 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0830 21:48:01.483486 1009842 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0830 21:48:01.499376 1009842 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 21:48:01.499394 1009842 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 21:48:01.982995 1009842 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0830 21:48:02.003499 1009842 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0830 21:48:02.003516 1009842 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0830 21:48:02.483089 1009842 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0830 21:48:02.492512 1009842 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0830 21:48:02.509361 1009842 api_server.go:141] control plane version: v1.28.1
	I0830 21:48:02.509378 1009842 api_server.go:131] duration metric: took 5.027097109s to wait for apiserver health ...
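The healthz wait above follows the usual apiserver boot progression: connection refused while the process binds, 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200. A sketch of that poll loop, skipping certificate verification for brevity (minikube authenticates with client certs instead):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200 or the deadline
// passes, tolerating the refused/403/500 responses seen in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8441/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```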
	I0830 21:48:02.509385 1009842 cni.go:84] Creating CNI manager for ""
	I0830 21:48:02.509391 1009842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0830 21:48:02.511805 1009842 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0830 21:48:02.513644 1009842 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0830 21:48:02.521198 1009842 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0830 21:48:02.521209 1009842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0830 21:48:02.558975 1009842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
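Applying the CNI manifest is the "scp memory" plus `kubectl apply` pair above: the kindnet manifest exists only in memory, gets written to /var/tmp/minikube/cni.yaml on the node, and is applied with the node-local kubeconfig and version-pinned kubectl. A local sketch with placeholder manifest bytes:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyCNIManifest writes the in-memory manifest to the path from the
// log, then applies it with the pinned kubectl and in-node kubeconfig.
// The manifest bytes here are a placeholder assumption.
func applyCNIManifest(manifest []byte) error {
	const target = "/var/tmp/minikube/cni.yaml"
	if err := os.WriteFile(target, manifest, 0o644); err != nil {
		return err
	}
	out, err := exec.Command(
		"/var/lib/minikube/binaries/v1.28.1/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", target,
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := applyCNIManifest([]byte("# kindnet manifest goes here\n")); err != nil {
		fmt.Println(err)
	}
}
```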
	I0830 21:48:03.358575 1009842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 21:48:03.368070 1009842 system_pods.go:59] 8 kube-system pods found
	I0830 21:48:03.368090 1009842 system_pods.go:61] "coredns-5dd5756b68-7rg2p" [29f381de-64bf-4485-adb4-935e61de0003] Running
	I0830 21:48:03.368098 1009842 system_pods.go:61] "etcd-functional-540436" [00732980-3d2b-4fa1-9056-1ecbb82851c1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0830 21:48:03.368104 1009842 system_pods.go:61] "kindnet-wgctp" [c8716d65-64d3-4833-a4c1-40109e33d25e] Running
	I0830 21:48:03.368111 1009842 system_pods.go:61] "kube-apiserver-functional-540436" [6009a82b-6ebd-43b1-88a5-66e6dc550a19] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0830 21:48:03.368118 1009842 system_pods.go:61] "kube-controller-manager-functional-540436" [ba7a9c2a-7d49-459b-ad78-e0932343a0b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0830 21:48:03.368123 1009842 system_pods.go:61] "kube-proxy-zqwx8" [af8958a4-c256-4ab8-bf6f-65ca6f33eb1d] Running
	I0830 21:48:03.368129 1009842 system_pods.go:61] "kube-scheduler-functional-540436" [275cc4b7-ef73-4286-8d1d-c19bdcda39f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0830 21:48:03.368134 1009842 system_pods.go:61] "storage-provisioner" [1a771d92-1ee8-4f9c-a683-dc6c42158c24] Running
	I0830 21:48:03.368139 1009842 system_pods.go:74] duration metric: took 9.554425ms to wait for pod list to return data ...
	I0830 21:48:03.368145 1009842 node_conditions.go:102] verifying NodePressure condition ...
	I0830 21:48:03.371673 1009842 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0830 21:48:03.371690 1009842 node_conditions.go:123] node cpu capacity is 2
	I0830 21:48:03.371700 1009842 node_conditions.go:105] duration metric: took 3.550921ms to run NodePressure ...
	I0830 21:48:03.371716 1009842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0830 21:48:03.582521 1009842 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0830 21:48:03.588396 1009842 kubeadm.go:787] kubelet initialised
	I0830 21:48:03.588405 1009842 kubeadm.go:788] duration metric: took 5.871361ms waiting for restarted kubelet to initialise ...
	I0830 21:48:03.588412 1009842 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:48:03.599833 1009842 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-7rg2p" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:03.607209 1009842 pod_ready.go:92] pod "coredns-5dd5756b68-7rg2p" in "kube-system" namespace has status "Ready":"True"
	I0830 21:48:03.607219 1009842 pod_ready.go:81] duration metric: took 7.355593ms waiting for pod "coredns-5dd5756b68-7rg2p" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:03.607229 1009842 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:05.628904 1009842 pod_ready.go:102] pod "etcd-functional-540436" in "kube-system" namespace has status "Ready":"False"
	I0830 21:48:07.630176 1009842 pod_ready.go:102] pod "etcd-functional-540436" in "kube-system" namespace has status "Ready":"False"
	I0830 21:48:10.130033 1009842 pod_ready.go:92] pod "etcd-functional-540436" in "kube-system" namespace has status "Ready":"True"
	I0830 21:48:10.130044 1009842 pod_ready.go:81] duration metric: took 6.522808782s waiting for pod "etcd-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:10.130058 1009842 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:10.650819 1009842 pod_ready.go:92] pod "kube-apiserver-functional-540436" in "kube-system" namespace has status "Ready":"True"
	I0830 21:48:10.650830 1009842 pod_ready.go:81] duration metric: took 520.766778ms waiting for pod "kube-apiserver-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:10.650841 1009842 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:10.656874 1009842 pod_ready.go:92] pod "kube-controller-manager-functional-540436" in "kube-system" namespace has status "Ready":"True"
	I0830 21:48:10.656885 1009842 pod_ready.go:81] duration metric: took 6.037794ms waiting for pod "kube-controller-manager-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:10.656895 1009842 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zqwx8" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:10.662570 1009842 pod_ready.go:92] pod "kube-proxy-zqwx8" in "kube-system" namespace has status "Ready":"True"
	I0830 21:48:10.662581 1009842 pod_ready.go:81] duration metric: took 5.680321ms waiting for pod "kube-proxy-zqwx8" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:10.662590 1009842 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:12.869353 1009842 pod_ready.go:102] pod "kube-scheduler-functional-540436" in "kube-system" namespace has status "Ready":"False"
	I0830 21:48:14.870023 1009842 pod_ready.go:102] pod "kube-scheduler-functional-540436" in "kube-system" namespace has status "Ready":"False"
	I0830 21:48:16.370197 1009842 pod_ready.go:92] pod "kube-scheduler-functional-540436" in "kube-system" namespace has status "Ready":"True"
	I0830 21:48:16.370209 1009842 pod_ready.go:81] duration metric: took 5.707612831s waiting for pod "kube-scheduler-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:16.370218 1009842 pod_ready.go:38] duration metric: took 12.781797879s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
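Each pod_ready.go wait above is the same poll: fetch the pod, check its Ready condition, log "Ready":"False" every couple of seconds until it flips to "True" or the per-pod budget (4m0s here) runs out. A sketch of that loop with the status lookup stubbed out — a real implementation would use a client-go Get call:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitPodReady polls getStatus until the pod reports Ready or the
// timeout expires. getStatus stands in for a real API lookup and is
// an assumption of this sketch.
func waitPodReady(name string, timeout time.Duration,
	getStatus func(string) (string, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := getStatus(name)
		if err != nil {
			return err
		}
		fmt.Printf("pod %q has status \"Ready\":%q\n", name, ready)
		if ready == "True" {
			return nil
		}
		time.Sleep(2 * time.Second) // the log polls on a ~2s cadence
	}
	return errors.New("timed out waiting for " + name)
}

func main() {
	calls := 0
	fake := func(string) (string, error) { // stand-in status source
		calls++
		if calls < 3 {
			return "False", nil
		}
		return "True", nil
	}
	if err := waitPodReady("kube-scheduler-functional-540436",
		4*time.Minute, fake); err != nil {
		fmt.Println(err)
	}
}
```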
	I0830 21:48:16.370233 1009842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 21:48:16.380057 1009842 ops.go:34] apiserver oom_adj: -16
	I0830 21:48:16.380085 1009842 kubeadm.go:640] restartCluster took 23.269153485s
	I0830 21:48:16.380092 1009842 kubeadm.go:406] StartCluster complete in 23.351033447s
	I0830 21:48:16.380108 1009842 settings.go:142] acquiring lock: {Name:mkc3addaaa213f1dd8b8b58d94d3f946bbcb1099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:48:16.380188 1009842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 21:48:16.380826 1009842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/kubeconfig: {Name:mk735c90eaee551cc7c6cf5c5ad3cfbf98dfe457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:48:16.381058 1009842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 21:48:16.381367 1009842 config.go:182] Loaded profile config "functional-540436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:48:16.381500 1009842 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 21:48:16.381569 1009842 addons.go:69] Setting storage-provisioner=true in profile "functional-540436"
	I0830 21:48:16.381582 1009842 addons.go:231] Setting addon storage-provisioner=true in "functional-540436"
	W0830 21:48:16.381588 1009842 addons.go:240] addon storage-provisioner should already be in state true
	I0830 21:48:16.381643 1009842 host.go:66] Checking if "functional-540436" exists ...
	I0830 21:48:16.381969 1009842 addons.go:69] Setting default-storageclass=true in profile "functional-540436"
	I0830 21:48:16.381983 1009842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-540436"
	I0830 21:48:16.382080 1009842 cli_runner.go:164] Run: docker container inspect functional-540436 --format={{.State.Status}}
	I0830 21:48:16.382238 1009842 cli_runner.go:164] Run: docker container inspect functional-540436 --format={{.State.Status}}
	I0830 21:48:16.391088 1009842 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-540436" context rescaled to 1 replicas
	I0830 21:48:16.391115 1009842 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 21:48:16.397244 1009842 out.go:177] * Verifying Kubernetes components...
	I0830 21:48:16.403800 1009842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:48:16.440411 1009842 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:48:16.442408 1009842 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 21:48:16.442418 1009842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 21:48:16.442495 1009842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-540436
	I0830 21:48:16.443325 1009842 addons.go:231] Setting addon default-storageclass=true in "functional-540436"
	W0830 21:48:16.443335 1009842 addons.go:240] addon default-storageclass should already be in state true
	I0830 21:48:16.443375 1009842 host.go:66] Checking if "functional-540436" exists ...
	I0830 21:48:16.443801 1009842 cli_runner.go:164] Run: docker container inspect functional-540436 --format={{.State.Status}}
	I0830 21:48:16.485584 1009842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/functional-540436/id_rsa Username:docker}
	I0830 21:48:16.491078 1009842 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 21:48:16.491090 1009842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 21:48:16.491156 1009842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-540436
	I0830 21:48:16.517340 1009842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/functional-540436/id_rsa Username:docker}
	I0830 21:48:16.563445 1009842 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0830 21:48:16.563477 1009842 node_ready.go:35] waiting up to 6m0s for node "functional-540436" to be "Ready" ...
	I0830 21:48:16.567225 1009842 node_ready.go:49] node "functional-540436" has status "Ready":"True"
	I0830 21:48:16.567236 1009842 node_ready.go:38] duration metric: took 3.749238ms waiting for node "functional-540436" to be "Ready" ...
	I0830 21:48:16.567245 1009842 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:48:16.578482 1009842 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7rg2p" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:16.585211 1009842 pod_ready.go:92] pod "coredns-5dd5756b68-7rg2p" in "kube-system" namespace has status "Ready":"True"
	I0830 21:48:16.585221 1009842 pod_ready.go:81] duration metric: took 6.725555ms waiting for pod "coredns-5dd5756b68-7rg2p" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:16.585231 1009842 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:16.596338 1009842 pod_ready.go:92] pod "etcd-functional-540436" in "kube-system" namespace has status "Ready":"True"
	I0830 21:48:16.596348 1009842 pod_ready.go:81] duration metric: took 11.11169ms waiting for pod "etcd-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:16.596361 1009842 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:16.651354 1009842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 21:48:16.666125 1009842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 21:48:16.962070 1009842 pod_ready.go:92] pod "kube-apiserver-functional-540436" in "kube-system" namespace has status "Ready":"True"
	I0830 21:48:16.962081 1009842 pod_ready.go:81] duration metric: took 365.714568ms waiting for pod "kube-apiserver-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:16.962091 1009842 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:17.077842 1009842 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0830 21:48:17.079537 1009842 addons.go:502] enable addons completed in 698.029875ms: enabled=[storage-provisioner default-storageclass]
	I0830 21:48:17.363116 1009842 pod_ready.go:92] pod "kube-controller-manager-functional-540436" in "kube-system" namespace has status "Ready":"True"
	I0830 21:48:17.363130 1009842 pod_ready.go:81] duration metric: took 401.033572ms waiting for pod "kube-controller-manager-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:17.363140 1009842 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zqwx8" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:17.762754 1009842 pod_ready.go:92] pod "kube-proxy-zqwx8" in "kube-system" namespace has status "Ready":"True"
	I0830 21:48:17.762764 1009842 pod_ready.go:81] duration metric: took 399.618329ms waiting for pod "kube-proxy-zqwx8" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:17.762773 1009842 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:18.162398 1009842 pod_ready.go:92] pod "kube-scheduler-functional-540436" in "kube-system" namespace has status "Ready":"True"
	I0830 21:48:18.162408 1009842 pod_ready.go:81] duration metric: took 399.62916ms waiting for pod "kube-scheduler-functional-540436" in "kube-system" namespace to be "Ready" ...
	I0830 21:48:18.162418 1009842 pod_ready.go:38] duration metric: took 1.595164625s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:48:18.162430 1009842 api_server.go:52] waiting for apiserver process to appear ...
	I0830 21:48:18.162487 1009842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:48:18.175668 1009842 api_server.go:72] duration metric: took 1.784524601s to wait for apiserver process to appear ...
	I0830 21:48:18.175681 1009842 api_server.go:88] waiting for apiserver healthz status ...
	I0830 21:48:18.175697 1009842 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0830 21:48:18.185758 1009842 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0830 21:48:18.187037 1009842 api_server.go:141] control plane version: v1.28.1
	I0830 21:48:18.187048 1009842 api_server.go:131] duration metric: took 11.36303ms to wait for apiserver health ...
	I0830 21:48:18.187055 1009842 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 21:48:18.366073 1009842 system_pods.go:59] 8 kube-system pods found
	I0830 21:48:18.366087 1009842 system_pods.go:61] "coredns-5dd5756b68-7rg2p" [29f381de-64bf-4485-adb4-935e61de0003] Running
	I0830 21:48:18.366091 1009842 system_pods.go:61] "etcd-functional-540436" [00732980-3d2b-4fa1-9056-1ecbb82851c1] Running
	I0830 21:48:18.366095 1009842 system_pods.go:61] "kindnet-wgctp" [c8716d65-64d3-4833-a4c1-40109e33d25e] Running
	I0830 21:48:18.366101 1009842 system_pods.go:61] "kube-apiserver-functional-540436" [6009a82b-6ebd-43b1-88a5-66e6dc550a19] Running
	I0830 21:48:18.366105 1009842 system_pods.go:61] "kube-controller-manager-functional-540436" [ba7a9c2a-7d49-459b-ad78-e0932343a0b1] Running
	I0830 21:48:18.366109 1009842 system_pods.go:61] "kube-proxy-zqwx8" [af8958a4-c256-4ab8-bf6f-65ca6f33eb1d] Running
	I0830 21:48:18.366113 1009842 system_pods.go:61] "kube-scheduler-functional-540436" [275cc4b7-ef73-4286-8d1d-c19bdcda39f8] Running
	I0830 21:48:18.366118 1009842 system_pods.go:61] "storage-provisioner" [1a771d92-1ee8-4f9c-a683-dc6c42158c24] Running
	I0830 21:48:18.366123 1009842 system_pods.go:74] duration metric: took 179.063423ms to wait for pod list to return data ...
	I0830 21:48:18.366130 1009842 default_sa.go:34] waiting for default service account to be created ...
	I0830 21:48:18.562600 1009842 default_sa.go:45] found service account: "default"
	I0830 21:48:18.562613 1009842 default_sa.go:55] duration metric: took 196.478391ms for default service account to be created ...
	I0830 21:48:18.562622 1009842 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 21:48:18.765231 1009842 system_pods.go:86] 8 kube-system pods found
	I0830 21:48:18.765246 1009842 system_pods.go:89] "coredns-5dd5756b68-7rg2p" [29f381de-64bf-4485-adb4-935e61de0003] Running
	I0830 21:48:18.765251 1009842 system_pods.go:89] "etcd-functional-540436" [00732980-3d2b-4fa1-9056-1ecbb82851c1] Running
	I0830 21:48:18.765256 1009842 system_pods.go:89] "kindnet-wgctp" [c8716d65-64d3-4833-a4c1-40109e33d25e] Running
	I0830 21:48:18.765260 1009842 system_pods.go:89] "kube-apiserver-functional-540436" [6009a82b-6ebd-43b1-88a5-66e6dc550a19] Running
	I0830 21:48:18.765264 1009842 system_pods.go:89] "kube-controller-manager-functional-540436" [ba7a9c2a-7d49-459b-ad78-e0932343a0b1] Running
	I0830 21:48:18.765268 1009842 system_pods.go:89] "kube-proxy-zqwx8" [af8958a4-c256-4ab8-bf6f-65ca6f33eb1d] Running
	I0830 21:48:18.765272 1009842 system_pods.go:89] "kube-scheduler-functional-540436" [275cc4b7-ef73-4286-8d1d-c19bdcda39f8] Running
	I0830 21:48:18.765276 1009842 system_pods.go:89] "storage-provisioner" [1a771d92-1ee8-4f9c-a683-dc6c42158c24] Running
	I0830 21:48:18.765281 1009842 system_pods.go:126] duration metric: took 202.655868ms to wait for k8s-apps to be running ...
	I0830 21:48:18.765287 1009842 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 21:48:18.765343 1009842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:48:18.779127 1009842 system_svc.go:56] duration metric: took 13.828692ms WaitForService to wait for kubelet.
	I0830 21:48:18.779143 1009842 kubeadm.go:581] duration metric: took 2.388006394s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 21:48:18.779161 1009842 node_conditions.go:102] verifying NodePressure condition ...
	I0830 21:48:18.962550 1009842 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0830 21:48:18.962564 1009842 node_conditions.go:123] node cpu capacity is 2
	I0830 21:48:18.962573 1009842 node_conditions.go:105] duration metric: took 183.408573ms to run NodePressure ...
	I0830 21:48:18.962583 1009842 start.go:228] waiting for startup goroutines ...
	I0830 21:48:18.962589 1009842 start.go:233] waiting for cluster config update ...
	I0830 21:48:18.962597 1009842 start.go:242] writing updated cluster config ...
	I0830 21:48:18.962895 1009842 ssh_runner.go:195] Run: rm -f paused
	I0830 21:48:19.025487 1009842 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 21:48:19.029406 1009842 out.go:177] * Done! kubectl is now configured to use "functional-540436" cluster and "default" namespace by default
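
The healthz poll recorded above (api_server.go checking https://192.168.49.2:8441/healthz and getting 200/"ok") can be reproduced outside the test harness. A minimal Go sketch, assuming the apiserver is still reachable on that address; certificate verification is skipped here purely for brevity, whereas minikube's real client trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // For illustration only: skip TLS verification instead of loading
        // the cluster CA from /var/lib/minikube/certs.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.2:8441/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the literal body "ok",
        // matching the two healthz log lines above.
        fmt.Printf("%d %s\n", resp.StatusCode, body)
    }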
	
	* 
	* ==> CRI-O <==
	* Aug 30 21:50:23 functional-540436 crio[4456]: time="2023-08-30 21:50:23.419044060Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Aug 30 21:50:31 functional-540436 crio[4456]: time="2023-08-30 21:50:31.416193795Z" level=info msg="Checking image status: docker.io/nginx:latest" id=ce50ad00-d327-4504-9ceb-f79240e7b15f name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:50:31 functional-540436 crio[4456]: time="2023-08-30 21:50:31.416413544Z" level=info msg="Image docker.io/nginx:latest not found" id=ce50ad00-d327-4504-9ceb-f79240e7b15f name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:50:46 functional-540436 crio[4456]: time="2023-08-30 21:50:46.416427542Z" level=info msg="Checking image status: docker.io/nginx:latest" id=7f6693be-a854-427c-8861-d6a0b6ccebe2 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:50:46 functional-540436 crio[4456]: time="2023-08-30 21:50:46.416660494Z" level=info msg="Image docker.io/nginx:latest not found" id=7f6693be-a854-427c-8861-d6a0b6ccebe2 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:50:57 functional-540436 crio[4456]: time="2023-08-30 21:50:57.416484986Z" level=info msg="Checking image status: docker.io/nginx:latest" id=0ca92eb8-b2a3-4651-9f0a-287e90165d8c name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:50:57 functional-540436 crio[4456]: time="2023-08-30 21:50:57.416723279Z" level=info msg="Image docker.io/nginx:latest not found" id=0ca92eb8-b2a3-4651-9f0a-287e90165d8c name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:50:57 functional-540436 crio[4456]: time="2023-08-30 21:50:57.417505834Z" level=info msg="Pulling image: docker.io/nginx:latest" id=752a4d05-7351-42c7-997b-f30681429566 name=/runtime.v1.ImageService/PullImage
	Aug 30 21:50:57 functional-540436 crio[4456]: time="2023-08-30 21:50:57.420003571Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Aug 30 21:50:58 functional-540436 crio[4456]: time="2023-08-30 21:50:58.415947846Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b5189796-5d85-4672-9ba0-54e2442ad2b6 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:50:58 functional-540436 crio[4456]: time="2023-08-30 21:50:58.416200686Z" level=info msg="Image docker.io/nginx:alpine not found" id=b5189796-5d85-4672-9ba0-54e2442ad2b6 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:51:09 functional-540436 crio[4456]: time="2023-08-30 21:51:09.415640570Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=3722d17a-25be-4045-b37f-7a7acb84a6c3 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:51:09 functional-540436 crio[4456]: time="2023-08-30 21:51:09.415883712Z" level=info msg="Image docker.io/nginx:alpine not found" id=3722d17a-25be-4045-b37f-7a7acb84a6c3 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:51:22 functional-540436 crio[4456]: time="2023-08-30 21:51:22.416316683Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=0d0975a1-a024-4104-8b41-af49183b8634 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:51:22 functional-540436 crio[4456]: time="2023-08-30 21:51:22.416561343Z" level=info msg="Image docker.io/nginx:alpine not found" id=0d0975a1-a024-4104-8b41-af49183b8634 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:51:22 functional-540436 crio[4456]: time="2023-08-30 21:51:22.416564691Z" level=info msg="Checking image status: docker.io/nginx:latest" id=3eb0811b-5455-4f4b-9cbb-946b58d88c9f name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:51:22 functional-540436 crio[4456]: time="2023-08-30 21:51:22.416749101Z" level=info msg="Image docker.io/nginx:latest not found" id=3eb0811b-5455-4f4b-9cbb-946b58d88c9f name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:51:35 functional-540436 crio[4456]: time="2023-08-30 21:51:35.415888369Z" level=info msg="Checking image status: docker.io/nginx:latest" id=42a13408-8a3a-4627-a726-7ecf99f426a9 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:51:35 functional-540436 crio[4456]: time="2023-08-30 21:51:35.416114329Z" level=info msg="Image docker.io/nginx:latest not found" id=42a13408-8a3a-4627-a726-7ecf99f426a9 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:51:36 functional-540436 crio[4456]: time="2023-08-30 21:51:36.416247436Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=79735c54-7daa-4830-84e9-9d9925e73d0f name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:51:36 functional-540436 crio[4456]: time="2023-08-30 21:51:36.416476852Z" level=info msg="Image docker.io/nginx:alpine not found" id=79735c54-7daa-4830-84e9-9d9925e73d0f name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:51:50 functional-540436 crio[4456]: time="2023-08-30 21:51:50.415888507Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=e534f235-71ec-4d1a-acc6-af0787e5a51a name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:51:50 functional-540436 crio[4456]: time="2023-08-30 21:51:50.416047973Z" level=info msg="Checking image status: docker.io/nginx:latest" id=9d9b1b9d-5667-4468-b0f0-619008227b98 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:51:50 functional-540436 crio[4456]: time="2023-08-30 21:51:50.416119825Z" level=info msg="Image docker.io/nginx:alpine not found" id=e534f235-71ec-4d1a-acc6-af0787e5a51a name=/runtime.v1.ImageService/ImageStatus
	Aug 30 21:51:50 functional-540436 crio[4456]: time="2023-08-30 21:51:50.416280416Z" level=info msg="Image docker.io/nginx:latest not found" id=9d9b1b9d-5667-4468-b0f0-619008227b98 name=/runtime.v1.ImageService/ImageStatus
	
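The "Checking image status" / "Image ... not found" pairs above are the kubelet polling CRI-O's ImageStatus RPC while the docker.io/nginx pulls are still in flight. A hedged sketch of the same RPC against the CRI-O socket, assuming k8s.io/cri-api and google.golang.org/grpc are available on the client side; a nil Image in the response corresponds to the "not found" lines:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Same endpoint the kubelet uses (see the cri-socket annotation
        // in the node description below).
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        img := runtimeapi.NewImageServiceClient(conn)
        resp, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
            Image: &runtimeapi.ImageSpec{Image: "docker.io/nginx:alpine"},
        })
        if err != nil {
            panic(err)
        }
        if resp.Image == nil {
            fmt.Println("image not found (still pulling, or the pull failed)")
        } else {
            fmt.Println("image present:", resp.Image.Id)
        }
    }
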
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                    CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	183f0241db18d       registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5   3 minutes ago       Running             echoserver-arm            0                   a2ec0d7606974       hello-node-759d89bdcc-l9z4w
	77e91156cd04f       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                         3 minutes ago       Running             kindnet-cni               3                   be49a41b32e90       kindnet-wgctp
	9d2e785597d32       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                         3 minutes ago       Running             coredns                   3                   149e3db87f360       coredns-5dd5756b68-7rg2p
	1ef2d71e3432f       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26                                         3 minutes ago       Running             kube-proxy                3                   4b656cb010ffb       kube-proxy-zqwx8
	eb7fa704ac30e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                         3 minutes ago       Running             storage-provisioner       3                   6260367c5e51b       storage-provisioner
	317e065cbe0ff       b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a                                         4 minutes ago       Running             kube-apiserver            0                   452b87a0b4c0b       kube-apiserver-functional-540436
	6b4202f9ed31e       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965                                         4 minutes ago       Running             kube-controller-manager   3                   948dd49e354a9       kube-controller-manager-functional-540436
	06662bc4ef899       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87                                         4 minutes ago       Running             kube-scheduler            3                   9b9c8a326172a       kube-scheduler-functional-540436
	a9f9ee2be4cdd       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                         4 minutes ago       Running             etcd                      3                   26f6cb73107d6       etcd-functional-540436
	9f3a49a281115       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                         4 minutes ago       Exited              coredns                   2                   149e3db87f360       coredns-5dd5756b68-7rg2p
	37db3648f5fec       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26                                         4 minutes ago       Exited              kube-proxy                2                   4b656cb010ffb       kube-proxy-zqwx8
	0fb26268fc2be       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965                                         4 minutes ago       Exited              kube-controller-manager   2                   948dd49e354a9       kube-controller-manager-functional-540436
	80169f5469464       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                         4 minutes ago       Exited              storage-provisioner       2                   6260367c5e51b       storage-provisioner
	76532378b7346       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87                                         4 minutes ago       Exited              kube-scheduler            2                   9b9c8a326172a       kube-scheduler-functional-540436
	858ea4b3ef1b8       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                         4 minutes ago       Exited              kindnet-cni               2                   be49a41b32e90       kindnet-wgctp
	e36a129cb5f42       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                         4 minutes ago       Exited              etcd                      2                   26f6cb73107d6       etcd-functional-540436
	
	* 
	* ==> coredns [9d2e785597d324c40fc8e8fb079d4a9925e0f8c12a432f99a697ff14351a5f01] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34908 - 27960 "HINFO IN 6780370554265342836.765928737794228311. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025603545s
	
	* 
	* ==> coredns [9f3a49a281115091089993192d603289c23ceb391466df05198fde723378faab] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43057 - 24959 "HINFO IN 5825801645206865823.3816059843195738504. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.085968727s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-540436
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-540436
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7e60a4db8510b81002db541520f138fed781588
	                    minikube.k8s.io/name=functional-540436
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T21_46_05_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 21:46:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-540436
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 21:51:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 21:49:02 +0000   Wed, 30 Aug 2023 21:45:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 21:49:02 +0000   Wed, 30 Aug 2023 21:45:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 21:49:02 +0000   Wed, 30 Aug 2023 21:45:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 21:49:02 +0000   Wed, 30 Aug 2023 21:46:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-540436
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022572Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022572Ki
	  pods:               110
	System Info:
	  Machine ID:                 c26707d99a634cb79e1f123a7750e534
	  System UUID:                dedc4ab8-7799-4947-8ff6-6a7950d48ed1
	  Boot ID:                    98673563-8173-4281-afb4-eac1dfafdc23
	  Kernel Version:             5.15.0-1043-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-l9z4w                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-5dd5756b68-7rg2p                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m42s
	  kube-system                 etcd-functional-540436                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m55s
	  kube-system                 kindnet-wgctp                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m43s
	  kube-system                 kube-apiserver-functional-540436             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-controller-manager-functional-540436    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-proxy-zqwx8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-scheduler-functional-540436             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 3m57s                kube-proxy       
	  Normal   Starting                 4m41s                kube-proxy       
	  Normal   Starting                 5m40s                kube-proxy       
	  Normal   Starting                 5m55s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m55s                kubelet          Node functional-540436 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m55s                kubelet          Node functional-540436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m55s                kubelet          Node functional-540436 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m43s                node-controller  Node functional-540436 event: Registered Node functional-540436 in Controller
	  Normal   NodeReady                5m11s                kubelet          Node functional-540436 status is now: NodeReady
	  Warning  ContainerGCFailed        4m55s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m29s                node-controller  Node functional-540436 event: Registered Node functional-540436 in Controller
	  Normal   Starting                 4m3s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  4m3s (x8 over 4m3s)  kubelet          Node functional-540436 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m3s (x8 over 4m3s)  kubelet          Node functional-540436 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m3s (x8 over 4m3s)  kubelet          Node functional-540436 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m46s                node-controller  Node functional-540436 event: Registered Node functional-540436 in Controller
	
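The request/limit percentages above follow directly from the node capacity listed earlier (2 CPU, 8022572Ki memory); kubectl truncates to whole percents:

    \frac{850\,\mathrm{m}}{2000\,\mathrm{m}} = 42.5\% \ (\text{shown as } 42\%),
    \qquad
    \frac{220\,\mathrm{Mi}}{8022572\,\mathrm{Ki}} = \frac{225280\,\mathrm{Ki}}{8022572\,\mathrm{Ki}} \approx 2.8\% \ (\text{shown as } 2\%)
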
	* 
	* ==> dmesg <==
	* [  +0.001065] FS-Cache: O-key=[8] 'fe3d5c0100000000'
	[  +0.000745] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=00000000d8a48a2b{9p.inode} n=000000006547550d
	[  +0.001035] FS-Cache: N-key=[8] 'fe3d5c0100000000'
	[  +2.727104] FS-Cache: Duplicate cookie detected
	[  +0.000777] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000987] FS-Cache: O-cookie d=00000000d8a48a2b{9p.inode} n=000000006d053276
	[  +0.001146] FS-Cache: O-key=[8] 'fd3d5c0100000000'
	[  +0.000716] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000960] FS-Cache: N-cookie d=00000000d8a48a2b{9p.inode} n=0000000095e2f235
	[  +0.001052] FS-Cache: N-key=[8] 'fd3d5c0100000000'
	[  +0.378288] FS-Cache: Duplicate cookie detected
	[  +0.000710] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001124] FS-Cache: O-cookie d=00000000d8a48a2b{9p.inode} n=00000000c68a6716
	[  +0.001196] FS-Cache: O-key=[8] '033e5c0100000000'
	[  +0.000794] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=00000000d8a48a2b{9p.inode} n=00000000ec866464
	[  +0.001128] FS-Cache: N-key=[8] '033e5c0100000000'
	[  +3.661121] FS-Cache: Duplicate cookie detected
	[  +0.000719] FS-Cache: O-cookie c=00000049 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001015] FS-Cache: O-cookie d=00000000ee697b3f{9P.session} n=000000007881e0f6
	[  +0.001093] FS-Cache: O-key=[10] '34323939363835373037'
	[  +0.000828] FS-Cache: N-cookie c=0000004a [p=00000002 fl=2 nc=0 na=1]
	[  +0.000946] FS-Cache: N-cookie d=00000000ee697b3f{9P.session} n=0000000088fb1c8a
	[  +0.001140] FS-Cache: N-key=[10] '34323939363835373037'
	
	* 
	* ==> etcd [a9f9ee2be4cddb60fbc596ff7f6a64c2d1be9da7da582010c914e71b12693f19] <==
	* {"level":"info","ts":"2023-08-30T21:47:57.577161Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-30T21:47:57.57728Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-30T21:47:57.57758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-08-30T21:47:57.57768Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-08-30T21:47:57.577819Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T21:47:57.577876Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T21:47:57.593324Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-30T21:47:57.594231Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-30T21:47:57.594273Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-30T21:47:57.594389Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-08-30T21:47:57.594403Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-08-30T21:47:58.681185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2023-08-30T21:47:58.68124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2023-08-30T21:47:58.681266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-08-30T21:47:58.68128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 5"}
	{"level":"info","ts":"2023-08-30T21:47:58.681287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2023-08-30T21:47:58.681297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 5"}
	{"level":"info","ts":"2023-08-30T21:47:58.681304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 5"}
	{"level":"info","ts":"2023-08-30T21:47:58.685386Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-540436 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-30T21:47:58.685429Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T21:47:58.685476Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T21:47:58.686731Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-30T21:47:58.685581Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-30T21:47:58.689155Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-30T21:47:58.718123Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
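The election log above ends with aec36adc501070cc re-elected leader at term 5. Whether that state held can be checked out-of-band with etcd's Status RPC; a sketch assuming go.etcd.io/etcd/client/v3 and the certs under /var/lib/minikube/certs/etcd (the server keypair is reused as the client cert here, which works only because kubeadm-style etcd server certs carry both server and client usages):

    package main

    import (
        "context"
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "os"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        // etcd runs with client-cert-auth (see the "starting with client
        // TLS" line above), so present a keypair signed by its CA.
        cert, err := tls.LoadX509KeyPair(
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/etcd/server.key")
        if err != nil {
            panic(err)
        }
        caPEM, err := os.ReadFile("/var/lib/minikube/certs/etcd/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"https://192.168.49.2:2379"},
            DialTimeout: 5 * time.Second,
            TLS: &tls.Config{
                Certificates: []tls.Certificate{cert},
                RootCAs:      pool,
            },
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        resp, err := cli.Status(context.TODO(), "https://192.168.49.2:2379")
        if err != nil {
            panic(err)
        }
        // For a healthy single-member cluster, Leader is the member's own
        // ID and RaftTerm matches the term in the log (5 here).
        fmt.Printf("leader=%x term=%d version=%s\n",
            resp.Leader, resp.RaftTerm, resp.Version)
    }
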
	* 
	* ==> etcd [e36a129cb5f42c05a11f77487d426631bc269934d12ff17fb21381cee712ad01] <==
	* {"level":"info","ts":"2023-08-30T21:47:13.235027Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T21:47:14.293113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2023-08-30T21:47:14.293166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2023-08-30T21:47:14.293193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-08-30T21:47:14.293206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2023-08-30T21:47:14.293212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-08-30T21:47:14.293222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2023-08-30T21:47:14.293234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-08-30T21:47:14.29574Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-540436 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-30T21:47:14.2959Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T21:47:14.296904Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-08-30T21:47:14.296961Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T21:47:14.304168Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-30T21:47:14.309181Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-30T21:47:14.309219Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-30T21:47:44.941091Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-08-30T21:47:44.94116Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-540436","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2023-08-30T21:47:44.941306Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-08-30T21:47:44.941389Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-08-30T21:47:44.990447Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-08-30T21:47:44.990636Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-08-30T21:47:44.99072Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2023-08-30T21:47:44.992919Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-08-30T21:47:44.993096Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-08-30T21:47:44.99334Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-540436","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  21:51:59 up  6:34,  0 users,  load average: 0.11, 0.95, 1.65
	Linux functional-540436 5.15.0-1043-aws #48~20.04.1-Ubuntu SMP Wed Aug 16 18:32:42 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [77e91156cd04ff7b02e36c954d39ad413dc26f84608d89ec183b3da772e07f40] <==
	* I0830 21:49:52.506129       1 main.go:227] handling current node
	I0830 21:50:02.577217       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:50:02.577243       1 main.go:227] handling current node
	I0830 21:50:12.582684       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:50:12.582715       1 main.go:227] handling current node
	I0830 21:50:22.591002       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:50:22.591030       1 main.go:227] handling current node
	I0830 21:50:32.600822       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:50:32.600851       1 main.go:227] handling current node
	I0830 21:50:42.613440       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:50:42.613469       1 main.go:227] handling current node
	I0830 21:50:52.624821       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:50:52.624846       1 main.go:227] handling current node
	I0830 21:51:02.628717       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:51:02.628840       1 main.go:227] handling current node
	I0830 21:51:12.640623       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:51:12.640665       1 main.go:227] handling current node
	I0830 21:51:22.652525       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:51:22.652553       1 main.go:227] handling current node
	I0830 21:51:32.663250       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:51:32.663280       1 main.go:227] handling current node
	I0830 21:51:42.675752       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:51:42.675782       1 main.go:227] handling current node
	I0830 21:51:52.688613       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:51:52.688648       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [858ea4b3ef1b8c580bf79ee90e29a3aad5dc0da7e4e6169eeaa750b179e42c99] <==
	* I0830 21:47:13.173304       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0830 21:47:13.173553       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0830 21:47:13.173737       1 main.go:116] setting mtu 1500 for CNI 
	I0830 21:47:13.173779       1 main.go:146] kindnetd IP family: "ipv4"
	I0830 21:47:13.173821       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0830 21:47:17.648307       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:47:17.650615       1 main.go:227] handling current node
	I0830 21:47:27.667294       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:47:27.667423       1 main.go:227] handling current node
	I0830 21:47:37.677447       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:47:37.677474       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [317e065cbe0fff92d0a25e6aea87db1694dd5494634dcf574a8602512fffbee8] <==
	* I0830 21:48:01.315249       1 shared_informer.go:318] Caches are synced for configmaps
	I0830 21:48:01.319437       1 aggregator.go:166] initial CRD sync complete...
	I0830 21:48:01.320101       1 autoregister_controller.go:141] Starting autoregister controller
	I0830 21:48:01.320157       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0830 21:48:01.320194       1 cache.go:39] Caches are synced for autoregister controller
	I0830 21:48:01.368668       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0830 21:48:01.369307       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0830 21:48:01.372200       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0830 21:48:01.372867       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0830 21:48:01.374001       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0830 21:48:01.383762       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0830 21:48:01.397716       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0830 21:48:02.090669       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0830 21:48:03.350343       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0830 21:48:03.483412       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0830 21:48:03.492930       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0830 21:48:03.562250       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0830 21:48:03.570966       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0830 21:48:19.597351       1 controller.go:624] quota admission added evaluator for: endpoints
	I0830 21:48:23.242913       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.73.204"}
	I0830 21:48:23.266912       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0830 21:48:27.619350       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x40090a19b0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x4008fcd8b0), ResponseWriter:(*httpsnoop.rw)(0x4008fcd8b0), Flusher:(*httpsnoop.rw)(0x4008fcd8b0), CloseNotifier:(*httpsnoop.rw)(0x4008fcd8b0), Pusher:(*httpsnoop.rw)(0x4008fcd8b0)}}, encoder:(*versioning.codec)(0x40090cc500), memAllocator:(*runtime.Allocator)(0x40090ce828)})
	I0830 21:48:32.157823       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0830 21:48:32.349040       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.129.116"}
	I0830 21:48:48.119513       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.106.138.29"}
	
	* 
	* ==> kube-controller-manager [0fb26268fc2be4c8676056c269b09db35ccdf56a15e263b594fd468075727f41] <==
	* I0830 21:47:30.611748       1 shared_informer.go:318] Caches are synced for PV protection
	I0830 21:47:30.612057       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="35.169415ms"
	I0830 21:47:30.612219       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.864µs"
	I0830 21:47:30.612272       1 shared_informer.go:318] Caches are synced for node
	I0830 21:47:30.612341       1 range_allocator.go:174] "Sending events to api server"
	I0830 21:47:30.612399       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0830 21:47:30.612429       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0830 21:47:30.612457       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0830 21:47:30.613634       1 shared_informer.go:318] Caches are synced for ephemeral
	I0830 21:47:30.619953       1 shared_informer.go:318] Caches are synced for expand
	I0830 21:47:30.624205       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0830 21:47:30.626967       1 shared_informer.go:318] Caches are synced for PVC protection
	I0830 21:47:30.632336       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0830 21:47:30.642441       1 shared_informer.go:318] Caches are synced for GC
	I0830 21:47:30.668541       1 shared_informer.go:318] Caches are synced for resource quota
	I0830 21:47:30.692061       1 shared_informer.go:318] Caches are synced for endpoint
	I0830 21:47:30.695447       1 shared_informer.go:318] Caches are synced for resource quota
	I0830 21:47:30.717485       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0830 21:47:30.740976       1 shared_informer.go:318] Caches are synced for persistent volume
	I0830 21:47:31.144462       1 shared_informer.go:318] Caches are synced for garbage collector
	I0830 21:47:31.188647       1 shared_informer.go:318] Caches are synced for garbage collector
	I0830 21:47:31.188693       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0830 21:47:32.797796       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.772µs"
	I0830 21:47:32.821950       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.816085ms"
	I0830 21:47:32.822180       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.958µs"
	
	* 
	* ==> kube-controller-manager [6b4202f9ed31ec3b5d437114266dfea0eadc9afd0f6e7173ce92dc579d6e87b0] <==
	* I0830 21:48:13.700714       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0830 21:48:13.700905       1 event.go:307] "Event occurred" object="functional-540436" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-540436 event: Registered Node functional-540436 in Controller"
	I0830 21:48:13.703583       1 shared_informer.go:318] Caches are synced for PVC protection
	I0830 21:48:13.703612       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0830 21:48:13.705426       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0830 21:48:13.708165       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0830 21:48:13.713217       1 shared_informer.go:318] Caches are synced for persistent volume
	I0830 21:48:13.727062       1 shared_informer.go:318] Caches are synced for HPA
	I0830 21:48:13.734372       1 shared_informer.go:318] Caches are synced for stateful set
	I0830 21:48:13.795215       1 shared_informer.go:318] Caches are synced for resource quota
	I0830 21:48:13.826905       1 shared_informer.go:318] Caches are synced for resource quota
	I0830 21:48:13.843131       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0830 21:48:13.868355       1 shared_informer.go:318] Caches are synced for cronjob
	I0830 21:48:13.877843       1 shared_informer.go:318] Caches are synced for job
	I0830 21:48:14.196458       1 shared_informer.go:318] Caches are synced for garbage collector
	I0830 21:48:14.196492       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0830 21:48:14.244044       1 shared_informer.go:318] Caches are synced for garbage collector
	I0830 21:48:32.162262       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-759d89bdcc to 1"
	I0830 21:48:32.211771       1 event.go:307] "Event occurred" object="default/hello-node-759d89bdcc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-759d89bdcc-l9z4w"
	I0830 21:48:32.247801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="86.906739ms"
	I0830 21:48:32.279721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="31.40399ms"
	I0830 21:48:32.279995       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="40.927µs"
	I0830 21:48:37.764298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="19.24851ms"
	I0830 21:48:37.764386       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="35.503µs"
	I0830 21:48:56.940869       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
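The last controller-manager line records default/myclaim waiting on the external provisioner k8s.io/minikube-hostpath (the storage-provisioner pod listed earlier). A minimal client-go sketch of the bound-check a test would perform against that claim; ExternalProvisioning events like the one above are expected while the claim is still Pending, and the poll simply times out if it never binds:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll until the claim leaves Pending, mirroring the wait loops
        // the pod_ready.go lines earlier in this log describe.
        err = wait.PollImmediate(time.Second, 4*time.Minute, func() (bool, error) {
            pvc, err := cs.CoreV1().PersistentVolumeClaims("default").
                Get(context.TODO(), "myclaim", metav1.GetOptions{})
            if err != nil {
                return false, nil // tolerate transient API errors and retry
            }
            fmt.Println("phase:", pvc.Status.Phase)
            return pvc.Status.Phase == corev1.ClaimBound, nil
        })
        if err != nil {
            panic(err)
        }
    }
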
	* 
	* ==> kube-proxy [1ef2d71e3432f23b159252d9c87886087513ff933dad98b3c975b394fb475a81] <==
	* I0830 21:48:02.011443       1 server_others.go:69] "Using iptables proxy"
	I0830 21:48:02.045499       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0830 21:48:02.259422       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0830 21:48:02.262802       1 server_others.go:152] "Using iptables Proxier"
	I0830 21:48:02.262868       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0830 21:48:02.262880       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0830 21:48:02.262965       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0830 21:48:02.263186       1 server.go:846] "Version info" version="v1.28.1"
	I0830 21:48:02.263205       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 21:48:02.270372       1 config.go:188] "Starting service config controller"
	I0830 21:48:02.270423       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0830 21:48:02.271262       1 config.go:97] "Starting endpoint slice config controller"
	I0830 21:48:02.271279       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0830 21:48:02.271662       1 config.go:315] "Starting node config controller"
	I0830 21:48:02.271679       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0830 21:48:02.373269       1 shared_informer.go:318] Caches are synced for node config
	I0830 21:48:02.373309       1 shared_informer.go:318] Caches are synced for service config
	I0830 21:48:02.373338       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [37db3648f5fec722d6d1f38820f48e575df63e352c75b2a97b12a9b99b2c78d0] <==
	* I0830 21:47:15.141121       1 server_others.go:69] "Using iptables proxy"
	I0830 21:47:17.676550       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0830 21:47:17.757691       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0830 21:47:17.761259       1 server_others.go:152] "Using iptables Proxier"
	I0830 21:47:17.761394       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0830 21:47:17.761432       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0830 21:47:17.761745       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0830 21:47:17.762295       1 server.go:846] "Version info" version="v1.28.1"
	I0830 21:47:17.762714       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 21:47:17.764508       1 config.go:188] "Starting service config controller"
	I0830 21:47:17.764765       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0830 21:47:17.764848       1 config.go:97] "Starting endpoint slice config controller"
	I0830 21:47:17.764880       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0830 21:47:17.766164       1 config.go:315] "Starting node config controller"
	I0830 21:47:17.766222       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0830 21:47:17.866552       1 shared_informer.go:318] Caches are synced for node config
	I0830 21:47:17.866689       1 shared_informer.go:318] Caches are synced for service config
	I0830 21:47:17.866705       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [06662bc4ef899b3e61ff4aa91a6ae89c69a1fb16d7124db71856a4a26e86a4ea] <==
	* I0830 21:48:01.283277       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0830 21:48:01.283332       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0830 21:48:01.305845       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0830 21:48:01.305961       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0830 21:48:01.306086       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 21:48:01.306136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0830 21:48:01.306236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0830 21:48:01.306277       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0830 21:48:01.306405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0830 21:48:01.306449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0830 21:48:01.306549       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0830 21:48:01.306605       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0830 21:48:01.306709       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0830 21:48:01.306754       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0830 21:48:01.306850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 21:48:01.306901       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0830 21:48:01.307085       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0830 21:48:01.307143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0830 21:48:01.307248       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0830 21:48:01.307299       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0830 21:48:01.307397       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0830 21:48:01.307446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0830 21:48:01.307493       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 21:48:01.307546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0830 21:48:01.385396       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [76532378b7346528b0a889f9678644197754853db8e9ffd95176f43b46cdfbe8] <==
	* I0830 21:47:15.743178       1 serving.go:348] Generated self-signed cert in-memory
	W0830 21:47:17.491076       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0830 21:47:17.491383       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0830 21:47:17.491443       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0830 21:47:17.491491       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0830 21:47:17.601846       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0830 21:47:17.601964       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 21:47:17.604325       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0830 21:47:17.609314       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0830 21:47:17.609400       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0830 21:47:17.612535       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0830 21:47:17.713647       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0830 21:47:44.939048       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0830 21:47:44.939092       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0830 21:47:44.939317       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0830 21:47:44.939829       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Aug 30 21:51:09 functional-540436 kubelet[4724]: E0830 21:51:09.416169    4724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="a6087e1a-7a1a-4535-99da-d59bcb12f8eb"
	Aug 30 21:51:22 functional-540436 kubelet[4724]: E0830 21:51:22.416956    4724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="87c99d6f-d364-47fa-813a-ea2f2e5c3712"
	Aug 30 21:51:22 functional-540436 kubelet[4724]: E0830 21:51:22.417033    4724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="a6087e1a-7a1a-4535-99da-d59bcb12f8eb"
	Aug 30 21:51:35 functional-540436 kubelet[4724]: E0830 21:51:35.416358    4724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="87c99d6f-d364-47fa-813a-ea2f2e5c3712"
	Aug 30 21:51:36 functional-540436 kubelet[4724]: E0830 21:51:36.417430    4724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="a6087e1a-7a1a-4535-99da-d59bcb12f8eb"
	Aug 30 21:51:50 functional-540436 kubelet[4724]: E0830 21:51:50.416663    4724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="87c99d6f-d364-47fa-813a-ea2f2e5c3712"
	Aug 30 21:51:50 functional-540436 kubelet[4724]: E0830 21:51:50.416985    4724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="a6087e1a-7a1a-4535-99da-d59bcb12f8eb"
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.586087    4724 manager.go:1106] Failed to create existing container: /docker/e181e8543ebef24ea4f19b0b5104675634648e809a55d154199a454e1f4061cd/crio-26f6cb73107d643d787d1f6470f17f22a4b7d9f35d0792b91256ac572334387a: Error finding container 26f6cb73107d643d787d1f6470f17f22a4b7d9f35d0792b91256ac572334387a: Status 404 returned error can't find the container with id 26f6cb73107d643d787d1f6470f17f22a4b7d9f35d0792b91256ac572334387a
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.586284    4724 manager.go:1106] Failed to create existing container: /docker/e181e8543ebef24ea4f19b0b5104675634648e809a55d154199a454e1f4061cd/crio-948dd49e354a9472ff73c54d5ad5cf015e8922822da44909f702cb54a319e8f4: Error finding container 948dd49e354a9472ff73c54d5ad5cf015e8922822da44909f702cb54a319e8f4: Status 404 returned error can't find the container with id 948dd49e354a9472ff73c54d5ad5cf015e8922822da44909f702cb54a319e8f4
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.586446    4724 manager.go:1106] Failed to create existing container: /docker/e181e8543ebef24ea4f19b0b5104675634648e809a55d154199a454e1f4061cd/crio-149e3db87f360686081a7fe14161211d3df36ceec245613daf85c644ad6db437: Error finding container 149e3db87f360686081a7fe14161211d3df36ceec245613daf85c644ad6db437: Status 404 returned error can't find the container with id 149e3db87f360686081a7fe14161211d3df36ceec245613daf85c644ad6db437
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.586583    4724 manager.go:1106] Failed to create existing container: /crio-948dd49e354a9472ff73c54d5ad5cf015e8922822da44909f702cb54a319e8f4: Error finding container 948dd49e354a9472ff73c54d5ad5cf015e8922822da44909f702cb54a319e8f4: Status 404 returned error can't find the container with id 948dd49e354a9472ff73c54d5ad5cf015e8922822da44909f702cb54a319e8f4
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.586726    4724 manager.go:1106] Failed to create existing container: /crio-021cf8ae31a7425ef3da8f9c10283a61aa1aae4e9bcd776c14db372990ac1b65: Error finding container 021cf8ae31a7425ef3da8f9c10283a61aa1aae4e9bcd776c14db372990ac1b65: Status 404 returned error can't find the container with id 021cf8ae31a7425ef3da8f9c10283a61aa1aae4e9bcd776c14db372990ac1b65
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.586885    4724 manager.go:1106] Failed to create existing container: /crio-26f6cb73107d643d787d1f6470f17f22a4b7d9f35d0792b91256ac572334387a: Error finding container 26f6cb73107d643d787d1f6470f17f22a4b7d9f35d0792b91256ac572334387a: Status 404 returned error can't find the container with id 26f6cb73107d643d787d1f6470f17f22a4b7d9f35d0792b91256ac572334387a
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.587052    4724 manager.go:1106] Failed to create existing container: /crio-6260367c5e51b1ba52ab3315dd2a438a7f7b349abda7fcadf0c4bf8456ff7938: Error finding container 6260367c5e51b1ba52ab3315dd2a438a7f7b349abda7fcadf0c4bf8456ff7938: Status 404 returned error can't find the container with id 6260367c5e51b1ba52ab3315dd2a438a7f7b349abda7fcadf0c4bf8456ff7938
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.587284    4724 manager.go:1106] Failed to create existing container: /docker/e181e8543ebef24ea4f19b0b5104675634648e809a55d154199a454e1f4061cd/crio-4b656cb010ffb4c63d0c30c55f647627092eb0f7726adb39794d6f0fb53e65b1: Error finding container 4b656cb010ffb4c63d0c30c55f647627092eb0f7726adb39794d6f0fb53e65b1: Status 404 returned error can't find the container with id 4b656cb010ffb4c63d0c30c55f647627092eb0f7726adb39794d6f0fb53e65b1
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.587603    4724 manager.go:1106] Failed to create existing container: /crio-149e3db87f360686081a7fe14161211d3df36ceec245613daf85c644ad6db437: Error finding container 149e3db87f360686081a7fe14161211d3df36ceec245613daf85c644ad6db437: Status 404 returned error can't find the container with id 149e3db87f360686081a7fe14161211d3df36ceec245613daf85c644ad6db437
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.587804    4724 manager.go:1106] Failed to create existing container: /crio-66521cdcdf4b6ac5b42c1f0db695faaca638c2c1eb2d86b4d080b9c0a234de81: Error finding container 66521cdcdf4b6ac5b42c1f0db695faaca638c2c1eb2d86b4d080b9c0a234de81: Status 404 returned error can't find the container with id 66521cdcdf4b6ac5b42c1f0db695faaca638c2c1eb2d86b4d080b9c0a234de81
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.588028    4724 manager.go:1106] Failed to create existing container: /docker/e181e8543ebef24ea4f19b0b5104675634648e809a55d154199a454e1f4061cd/crio-be49a41b32e90f7ff4f78d27469486879e7295578177d0272d5de36aaa64920b: Error finding container be49a41b32e90f7ff4f78d27469486879e7295578177d0272d5de36aaa64920b: Status 404 returned error can't find the container with id be49a41b32e90f7ff4f78d27469486879e7295578177d0272d5de36aaa64920b
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.588248    4724 manager.go:1106] Failed to create existing container: /crio-4b656cb010ffb4c63d0c30c55f647627092eb0f7726adb39794d6f0fb53e65b1: Error finding container 4b656cb010ffb4c63d0c30c55f647627092eb0f7726adb39794d6f0fb53e65b1: Status 404 returned error can't find the container with id 4b656cb010ffb4c63d0c30c55f647627092eb0f7726adb39794d6f0fb53e65b1
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.588470    4724 manager.go:1106] Failed to create existing container: /docker/e181e8543ebef24ea4f19b0b5104675634648e809a55d154199a454e1f4061cd/crio-66521cdcdf4b6ac5b42c1f0db695faaca638c2c1eb2d86b4d080b9c0a234de81: Error finding container 66521cdcdf4b6ac5b42c1f0db695faaca638c2c1eb2d86b4d080b9c0a234de81: Status 404 returned error can't find the container with id 66521cdcdf4b6ac5b42c1f0db695faaca638c2c1eb2d86b4d080b9c0a234de81
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.588679    4724 manager.go:1106] Failed to create existing container: /docker/e181e8543ebef24ea4f19b0b5104675634648e809a55d154199a454e1f4061cd/crio-021cf8ae31a7425ef3da8f9c10283a61aa1aae4e9bcd776c14db372990ac1b65: Error finding container 021cf8ae31a7425ef3da8f9c10283a61aa1aae4e9bcd776c14db372990ac1b65: Status 404 returned error can't find the container with id 021cf8ae31a7425ef3da8f9c10283a61aa1aae4e9bcd776c14db372990ac1b65
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.588841    4724 manager.go:1106] Failed to create existing container: /crio-be49a41b32e90f7ff4f78d27469486879e7295578177d0272d5de36aaa64920b: Error finding container be49a41b32e90f7ff4f78d27469486879e7295578177d0272d5de36aaa64920b: Status 404 returned error can't find the container with id be49a41b32e90f7ff4f78d27469486879e7295578177d0272d5de36aaa64920b
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.588998    4724 manager.go:1106] Failed to create existing container: /docker/e181e8543ebef24ea4f19b0b5104675634648e809a55d154199a454e1f4061cd/crio-6260367c5e51b1ba52ab3315dd2a438a7f7b349abda7fcadf0c4bf8456ff7938: Error finding container 6260367c5e51b1ba52ab3315dd2a438a7f7b349abda7fcadf0c4bf8456ff7938: Status 404 returned error can't find the container with id 6260367c5e51b1ba52ab3315dd2a438a7f7b349abda7fcadf0c4bf8456ff7938
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.589198    4724 manager.go:1106] Failed to create existing container: /crio-9b9c8a326172ac29f3095287a62332b7062dce954ae1c0542dcc0a1aa7a723eb: Error finding container 9b9c8a326172ac29f3095287a62332b7062dce954ae1c0542dcc0a1aa7a723eb: Status 404 returned error can't find the container with id 9b9c8a326172ac29f3095287a62332b7062dce954ae1c0542dcc0a1aa7a723eb
	Aug 30 21:51:56 functional-540436 kubelet[4724]: E0830 21:51:56.589377    4724 manager.go:1106] Failed to create existing container: /docker/e181e8543ebef24ea4f19b0b5104675634648e809a55d154199a454e1f4061cd/crio-9b9c8a326172ac29f3095287a62332b7062dce954ae1c0542dcc0a1aa7a723eb: Error finding container 9b9c8a326172ac29f3095287a62332b7062dce954ae1c0542dcc0a1aa7a723eb: Status 404 returned error can't find the container with id 9b9c8a326172ac29f3095287a62332b7062dce954ae1c0542dcc0a1aa7a723eb
	
	* 
	* ==> storage-provisioner [80169f5469464e5ed2fb1d5f578e7ba8acd0eb75d45ccd91ec9a3d6a934a9411] <==
	* I0830 21:47:14.104276       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0830 21:47:17.711009       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0830 21:47:17.711431       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0830 21:47:35.126374       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0830 21:47:35.126859       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"476cb80e-df70-4597-a9cc-5cfed18aacbd", APIVersion:"v1", ResourceVersion:"533", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-540436_902bc2ad-62c0-4991-9695-0732ecf794d9 became leader
	I0830 21:47:35.126953       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-540436_902bc2ad-62c0-4991-9695-0732ecf794d9!
	I0830 21:47:35.227898       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-540436_902bc2ad-62c0-4991-9695-0732ecf794d9!
	
	* 
	* ==> storage-provisioner [eb7fa704ac30eb071591fd4d9a83797130faecfed1763fa748fa5fc11acf77b9] <==
	* I0830 21:48:02.142747       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0830 21:48:02.185492       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0830 21:48:02.185681       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0830 21:48:19.600734       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0830 21:48:19.600921       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-540436_596bf2f9-d6a2-4e81-a906-61419207fe92!
	I0830 21:48:19.609494       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"476cb80e-df70-4597-a9cc-5cfed18aacbd", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-540436_596bf2f9-d6a2-4e81-a906-61419207fe92 became leader
	I0830 21:48:19.701124       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-540436_596bf2f9-d6a2-4e81-a906-61419207fe92!
	I0830 21:48:56.943148       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0830 21:48:56.944323       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2ef38d84-0fdf-45f9-a679-ec09f2582094", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0830 21:48:56.943278       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    4159c3bd-7175-464e-bed3-44273374c41a 371 0 2023-08-30 21:46:18 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-08-30 21:46:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-2ef38d84-0fdf-45f9-a679-ec09f2582094 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  2ef38d84-0fdf-45f9-a679-ec09f2582094 730 0 2023-08-30 21:48:56 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-08-30 21:48:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-08-30 21:48:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0830 21:48:56.947911       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-2ef38d84-0fdf-45f9-a679-ec09f2582094" provisioned
	I0830 21:48:56.947944       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0830 21:48:56.947959       1 volume_store.go:212] Trying to save persistentvolume "pvc-2ef38d84-0fdf-45f9-a679-ec09f2582094"
	I0830 21:48:57.086009       1 volume_store.go:219] persistentvolume "pvc-2ef38d84-0fdf-45f9-a679-ec09f2582094" saved
	I0830 21:48:57.087256       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2ef38d84-0fdf-45f9-a679-ec09f2582094", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-2ef38d84-0fdf-45f9-a679-ec09f2582094
	

-- /stdout --
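For reference, the claim the storage-provisioner handled above unpacks (from the last-applied-configuration echoed in its log) to roughly the following manifest. This is a sketch of the equivalent kubectl invocation reconstructed from the log, not the test's actual testdata fixture:

	# Reconstructed from the PVC's last-applied-configuration in the log above;
	# a sketch, not the fixture the test itself applies.
	kubectl --context functional-540436 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem
	EOF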
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-540436 -n functional-540436
helpers_test.go:261: (dbg) Run:  kubectl --context functional-540436 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-540436 describe pod nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-540436 describe pod nginx-svc sp-pod:

-- stdout --
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-540436/192.168.49.2
	Start Time:       Wed, 30 Aug 2023 21:48:48 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xb6cx (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-xb6cx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m13s                default-scheduler  Successfully assigned default/nginx-svc to functional-540436
	  Warning  Failed     3m3s                 kubelet            Failed to pull image "docker.io/nginx:alpine": pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": read tcp 192.168.49.2:47324->34.205.13.154:443: read: connection reset by peer
	  Warning  Failed     2m52s                kubelet            Failed to pull image "docker.io/nginx:alpine": pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": read tcp 192.168.49.2:56598->44.205.64.79:443: read: connection reset by peer
	  Warning  Failed     2m27s                kubelet            Failed to pull image "docker.io/nginx:alpine": Get "https://registry-1.docker.io/v2/library/nginx/manifests/alpine": read tcp 192.168.49.2:35526->44.205.64.79:443: read: connection reset by peer
	  Normal   Pulling    98s (x4 over 3m13s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     78s (x4 over 3m3s)   kubelet            Error: ErrImagePull
	  Warning  Failed     78s                  kubelet            Failed to pull image "docker.io/nginx:alpine": pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": net/http: TLS handshake timeout
	  Warning  Failed     63s (x6 over 3m3s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    52s (x7 over 3m3s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-540436/192.168.49.2
	Start Time:       Wed, 30 Aug 2023 21:48:57 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2c7nd (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-2c7nd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m4s                 default-scheduler  Successfully assigned default/sp-pod to functional-540436
	  Warning  Failed     2m29s                kubelet            Failed to pull image "docker.io/nginx": pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": net/http: TLS handshake timeout
	  Warning  Failed     115s                 kubelet            Failed to pull image "docker.io/nginx": pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": read tcp 192.168.49.2:53552->44.205.64.79:443: read: connection reset by peer
	  Normal   Pulling    64s (x4 over 3m4s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     54s (x2 over 2m53s)  kubelet            Failed to pull image "docker.io/nginx": Get "https://auth.docker.io/token?scope=repository%3Alibrary%2Fnginx%3Apull&service=registry.docker.io": net/http: TLS handshake timeout
	  Warning  Failed     54s (x4 over 2m53s)  kubelet            Error: ErrImagePull
	  Warning  Failed     39s (x6 over 2m53s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    26s (x7 over 2m53s)  kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.47s)
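The events above show the real failure: both pods sat in ImagePullBackOff because every pull from registry-1.docker.io died with connection resets or TLS handshake timeouts, while the PVC itself provisioned and bound cleanly. A minimal triage sketch from the CI host, assuming the standard minikube CLI and the profile name shown in the logs:

	# Can the minikube node itself reach Docker Hub? (mirrors the ssh form the tests use)
	minikube -p functional-540436 ssh "curl -sSI --max-time 10 https://registry-1.docker.io/v2/"

	# Side-step a flaky registry by loading the image into the cluster runtime directly.
	docker pull docker.io/nginx:alpine
	minikube -p functional-540436 image load docker.io/nginx:alpine

If the in-node curl also resets, the problem is host-side networking rather than anything in the test itself.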

TestIngressAddonLegacy/serial/ValidateIngressAddons (179.31s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-855931 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-855931 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.58200752s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-855931 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-855931 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b74c8aa1-39c8-4505-9855-bd59cf5412ee] Pending
helpers_test.go:344: "nginx" [b74c8aa1-39c8-4505-9855-bd59cf5412ee] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b74c8aa1-39c8-4505-9855-bd59cf5412ee] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.017629244s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-855931 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0830 21:54:54.279882  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 21:55:43.072816  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 21:56:16.200156  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-855931 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.739269826s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-855931 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-855931 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.023646598s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-855931 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-855931 addons disable ingress-dns --alsologtostderr -v=1: (1.842586603s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-855931 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-855931 addons disable ingress --alsologtostderr -v=1: (7.570351392s)
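The nslookup timeout above means nothing answered DNS queries on 192.168.49.2 at all before the addon was torn down. A short diagnostic sketch, assuming standard dig options and that the addon pod's name contains "ingress-dns" as in current minikube manifests:

	# Probe the ingress-dns responder with an explicit timeout instead of nslookup's long default.
	dig +time=2 +tries=1 @192.168.49.2 hello-john.test

	# Confirm the addon's DNS pod was actually running before blaming the query path.
	kubectl --context ingress-addon-legacy-855931 -n kube-system get pods -o wide | grep -i ingress-dns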
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-855931
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-855931:

-- stdout --
	[
	    {
	        "Id": "44df7e1b0a94aa2208a5851e1542ef0bd4f3c0af92d12a70be614f1f80b7cf94",
	        "Created": "2023-08-30T21:52:55.505612304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1018455,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-30T21:52:55.833269518Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c0704b3a4f8b9b9ec71e677be36506d49ffd7d56513ca0bdb5d12d8921195405",
	        "ResolvConfPath": "/var/lib/docker/containers/44df7e1b0a94aa2208a5851e1542ef0bd4f3c0af92d12a70be614f1f80b7cf94/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/44df7e1b0a94aa2208a5851e1542ef0bd4f3c0af92d12a70be614f1f80b7cf94/hostname",
	        "HostsPath": "/var/lib/docker/containers/44df7e1b0a94aa2208a5851e1542ef0bd4f3c0af92d12a70be614f1f80b7cf94/hosts",
	        "LogPath": "/var/lib/docker/containers/44df7e1b0a94aa2208a5851e1542ef0bd4f3c0af92d12a70be614f1f80b7cf94/44df7e1b0a94aa2208a5851e1542ef0bd4f3c0af92d12a70be614f1f80b7cf94-json.log",
	        "Name": "/ingress-addon-legacy-855931",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-855931:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-855931",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/89dc7572517904e6195f3d0c8d48b73d325a75356ae4f85586c8c83a1e6ad223-init/diff:/var/lib/docker/overlay2/5a8abadbbe02000d4a1cbd31235f9b3bba474489fe1515f2d12f946a2d011f32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89dc7572517904e6195f3d0c8d48b73d325a75356ae4f85586c8c83a1e6ad223/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89dc7572517904e6195f3d0c8d48b73d325a75356ae4f85586c8c83a1e6ad223/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89dc7572517904e6195f3d0c8d48b73d325a75356ae4f85586c8c83a1e6ad223/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-855931",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-855931/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-855931",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-855931",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-855931",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4313c99d4a53e04d5990c7d82f37bc42ae3d82a5380f6807e799820335ec1ed6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34028"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34027"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34024"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34026"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34025"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4313c99d4a53",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-855931": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "44df7e1b0a94",
	                        "ingress-addon-legacy-855931"
	                    ],
	                    "NetworkID": "81d0785e7926b077e50d9499fa6e60e9d7dad2ea9dfa535cf85b552d6c7d881b",
	                    "EndpointID": "2437586dea7f7feb1f5fabe8f02a61ecac9e0b6f76069f3a3832fd34e1bc74bb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-855931 -n ingress-addon-legacy-855931
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-855931 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-855931 logs -n 25: (1.418461852s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-540436 ssh findmnt        | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC | 30 Aug 23 21:52 UTC |
	|                | -T /mount2                           |                             |         |         |                     |                     |
	| ssh            | functional-540436 ssh findmnt        | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC | 30 Aug 23 21:52 UTC |
	|                | -T /mount3                           |                             |         |         |                     |                     |
	| mount          | -p functional-540436                 | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC |                     |
	|                | --kill=true                          |                             |         |         |                     |                     |
	| start          | -p functional-540436                 | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| start          | -p functional-540436                 | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC |                     |
	|                | --dry-run --alsologtostderr          |                             |         |         |                     |                     |
	|                | -v=1 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| start          | -p functional-540436                 | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| dashboard      | --url --port 36195                   | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC | 30 Aug 23 21:52 UTC |
	|                | -p functional-540436                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| update-context | functional-540436                    | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC | 30 Aug 23 21:52 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-540436                    | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC | 30 Aug 23 21:52 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-540436                    | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC | 30 Aug 23 21:52 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-540436                    | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC | 30 Aug 23 21:52 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-540436                    | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC | 30 Aug 23 21:52 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-540436 ssh pgrep          | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-540436 image build -t     | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC | 30 Aug 23 21:52 UTC |
	|                | localhost/my-image:functional-540436 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-540436                    | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC | 30 Aug 23 21:52 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-540436 image ls           | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC | 30 Aug 23 21:52 UTC |
	| image          | functional-540436                    | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC | 30 Aug 23 21:52 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| delete         | -p functional-540436                 | functional-540436           | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC | 30 Aug 23 21:52 UTC |
	| start          | -p ingress-addon-legacy-855931       | ingress-addon-legacy-855931 | jenkins | v1.31.2 | 30 Aug 23 21:52 UTC | 30 Aug 23 21:54 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-855931          | ingress-addon-legacy-855931 | jenkins | v1.31.2 | 30 Aug 23 21:54 UTC | 30 Aug 23 21:54 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-855931          | ingress-addon-legacy-855931 | jenkins | v1.31.2 | 30 Aug 23 21:54 UTC | 30 Aug 23 21:54 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-855931          | ingress-addon-legacy-855931 | jenkins | v1.31.2 | 30 Aug 23 21:54 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-855931 ip       | ingress-addon-legacy-855931 | jenkins | v1.31.2 | 30 Aug 23 21:56 UTC | 30 Aug 23 21:56 UTC |
	| addons         | ingress-addon-legacy-855931          | ingress-addon-legacy-855931 | jenkins | v1.31.2 | 30 Aug 23 21:57 UTC | 30 Aug 23 21:57 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-855931          | ingress-addon-legacy-855931 | jenkins | v1.31.2 | 30 Aug 23 21:57 UTC | 30 Aug 23 21:57 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
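The ssh curl row above is the only one in the table without a completion timestamp, consistent with the ingress check timing out. To retrace it by hand against a live profile, a minimal sketch (the binary path and profile name are the ones from the table; the -m 30 curl timeout is an addition here so a hung request fails fast instead of waiting on the ssh side):

	# Re-run the in-VM curl against the ingress controller, as the logged ssh row does.
	out/minikube-linux-arm64 -p ingress-addon-legacy-855931 ssh \
	  "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"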
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 21:52:37
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
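Given that record layout, the severity is the first byte of each line ([IWEF]) followed by the mmdd date, so warnings and errors can be pulled out of a long run with one grep. A sketch, assuming the transcript has been saved to a file (start.log is a hypothetical name) with the leading tabs stripped:

	# I=info, W=warning, E=error, F=fatal; match the severity byte plus the mmdd date.
	grep -E '^[WEF][0-9]{4} ' start.log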
	I0830 21:52:37.267335 1017996 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:52:37.267592 1017996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:52:37.267619 1017996 out.go:309] Setting ErrFile to fd 2...
	I0830 21:52:37.267636 1017996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:52:37.267949 1017996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
	I0830 21:52:37.268436 1017996 out.go:303] Setting JSON to false
	I0830 21:52:37.269450 1017996 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23692,"bootTime":1693408666,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0830 21:52:37.269549 1017996 start.go:138] virtualization:  
	I0830 21:52:37.272343 1017996 out.go:177] * [ingress-addon-legacy-855931] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0830 21:52:37.274482 1017996 out.go:177]   - MINIKUBE_LOCATION=17145
	I0830 21:52:37.276165 1017996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:52:37.274626 1017996 notify.go:220] Checking for updates...
	I0830 21:52:37.278043 1017996 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 21:52:37.279853 1017996 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	I0830 21:52:37.281528 1017996 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0830 21:52:37.283385 1017996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 21:52:37.285249 1017996 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 21:52:37.311884 1017996 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 21:52:37.311987 1017996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 21:52:37.399363 1017996 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-08-30 21:52:37.389739701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 21:52:37.399458 1017996 docker.go:294] overlay module found
	I0830 21:52:37.401347 1017996 out.go:177] * Using the docker driver based on user configuration
	I0830 21:52:37.403115 1017996 start.go:298] selected driver: docker
	I0830 21:52:37.403135 1017996 start.go:902] validating driver "docker" against <nil>
	I0830 21:52:37.403148 1017996 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 21:52:37.403755 1017996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 21:52:37.474928 1017996 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-08-30 21:52:37.464707778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 21:52:37.475103 1017996 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 21:52:37.475381 1017996 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 21:52:37.477027 1017996 out.go:177] * Using Docker driver with root privileges
	I0830 21:52:37.478444 1017996 cni.go:84] Creating CNI manager for ""
	I0830 21:52:37.478459 1017996 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0830 21:52:37.478474 1017996 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0830 21:52:37.478487 1017996 start_flags.go:319] config:
	{Name:ingress-addon-legacy-855931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-855931 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:52:37.480368 1017996 out.go:177] * Starting control plane node ingress-addon-legacy-855931 in cluster ingress-addon-legacy-855931
	I0830 21:52:37.482080 1017996 cache.go:122] Beginning downloading kic base image for docker with crio
	I0830 21:52:37.483759 1017996 out.go:177] * Pulling base image ...
	I0830 21:52:37.485282 1017996 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0830 21:52:37.485363 1017996 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0830 21:52:37.502516 1017996 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon, skipping pull
	I0830 21:52:37.502542 1017996 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad exists in daemon, skipping load
	I0830 21:52:37.557221 1017996 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0830 21:52:37.557262 1017996 cache.go:57] Caching tarball of preloaded images
	I0830 21:52:37.557429 1017996 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0830 21:52:37.559423 1017996 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0830 21:52:37.560993 1017996 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0830 21:52:37.689027 1017996 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0830 21:52:47.647258 1017996 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0830 21:52:47.648067 1017996 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0830 21:52:48.759455 1017996 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0830 21:52:48.759826 1017996 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/config.json ...
	I0830 21:52:48.759861 1017996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/config.json: {Name:mka1416175f92d62e474cc33da0f6b6d6f025cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:52:48.760048 1017996 cache.go:195] Successfully downloaded all kic artifacts
	I0830 21:52:48.760110 1017996 start.go:365] acquiring machines lock for ingress-addon-legacy-855931: {Name:mkfcb6ed41037793f0f59d54777ee9a290acaf3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 21:52:48.760168 1017996 start.go:369] acquired machines lock for "ingress-addon-legacy-855931" in 44.808µs
	I0830 21:52:48.760191 1017996 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-855931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-855931 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 21:52:48.760261 1017996 start.go:125] createHost starting for "" (driver="docker")
	I0830 21:52:48.762387 1017996 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0830 21:52:48.762605 1017996 start.go:159] libmachine.API.Create for "ingress-addon-legacy-855931" (driver="docker")
	I0830 21:52:48.762633 1017996 client.go:168] LocalClient.Create starting
	I0830 21:52:48.762719 1017996 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem
	I0830 21:52:48.762757 1017996 main.go:141] libmachine: Decoding PEM data...
	I0830 21:52:48.762776 1017996 main.go:141] libmachine: Parsing certificate...
	I0830 21:52:48.762832 1017996 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem
	I0830 21:52:48.762852 1017996 main.go:141] libmachine: Decoding PEM data...
	I0830 21:52:48.762865 1017996 main.go:141] libmachine: Parsing certificate...
	I0830 21:52:48.763226 1017996 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-855931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0830 21:52:48.781097 1017996 cli_runner.go:211] docker network inspect ingress-addon-legacy-855931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0830 21:52:48.781208 1017996 network_create.go:281] running [docker network inspect ingress-addon-legacy-855931] to gather additional debugging logs...
	I0830 21:52:48.781229 1017996 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-855931
	W0830 21:52:48.802080 1017996 cli_runner.go:211] docker network inspect ingress-addon-legacy-855931 returned with exit code 1
	I0830 21:52:48.802109 1017996 network_create.go:284] error running [docker network inspect ingress-addon-legacy-855931]: docker network inspect ingress-addon-legacy-855931: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-855931 not found
	I0830 21:52:48.802124 1017996 network_create.go:286] output of [docker network inspect ingress-addon-legacy-855931]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-855931 not found
	
	** /stderr **
	I0830 21:52:48.802191 1017996 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0830 21:52:48.824001 1017996 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40011849e0}
	I0830 21:52:48.824044 1017996 network_create.go:123] attempt to create docker network ingress-addon-legacy-855931 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0830 21:52:48.824112 1017996 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-855931 ingress-addon-legacy-855931
	I0830 21:52:48.898858 1017996 network_create.go:107] docker network ingress-addon-legacy-855931 192.168.49.0/24 created
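	The subnet and gateway chosen above can be confirmed after the fact with docker network inspect; a sketch using a Go-template filter (the network name is the profile name from the log line):
	
	  # Print the subnet and gateway the bridge network was created with.
	  docker network inspect ingress-addon-legacy-855931 \
	    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	  # expected: 192.168.49.0/24 192.168.49.1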
	I0830 21:52:48.898891 1017996 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-855931" container
	I0830 21:52:48.898969 1017996 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0830 21:52:48.916456 1017996 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-855931 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-855931 --label created_by.minikube.sigs.k8s.io=true
	I0830 21:52:48.935315 1017996 oci.go:103] Successfully created a docker volume ingress-addon-legacy-855931
	I0830 21:52:48.935413 1017996 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-855931-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-855931 --entrypoint /usr/bin/test -v ingress-addon-legacy-855931:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -d /var/lib
	I0830 21:52:50.509284 1017996 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-855931-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-855931 --entrypoint /usr/bin/test -v ingress-addon-legacy-855931:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -d /var/lib: (1.573818801s)
	I0830 21:52:50.509319 1017996 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-855931
	I0830 21:52:50.509347 1017996 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0830 21:52:50.509367 1017996 kic.go:190] Starting extracting preloaded images to volume ...
	I0830 21:52:50.509454 1017996 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-855931:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir
	I0830 21:52:55.416502 1017996 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-855931:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir: (4.907000184s)
	I0830 21:52:55.416533 1017996 kic.go:199] duration metric: took 4.907163 seconds to extract preloaded images to volume
	W0830 21:52:55.416681 1017996 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0830 21:52:55.416791 1017996 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0830 21:52:55.483376 1017996 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-855931 --name ingress-addon-legacy-855931 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-855931 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-855931 --network ingress-addon-legacy-855931 --ip 192.168.49.2 --volume ingress-addon-legacy-855931:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad
	I0830 21:52:55.841574 1017996 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-855931 --format={{.State.Running}}
	I0830 21:52:55.876027 1017996 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-855931 --format={{.State.Status}}
	I0830 21:52:55.908733 1017996 cli_runner.go:164] Run: docker exec ingress-addon-legacy-855931 stat /var/lib/dpkg/alternatives/iptables
	I0830 21:52:55.989991 1017996 oci.go:144] the created container "ingress-addon-legacy-855931" has a running status.
	I0830 21:52:55.990017 1017996 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/ingress-addon-legacy-855931/id_rsa...
	I0830 21:52:56.274386 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/ingress-addon-legacy-855931/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0830 21:52:56.274477 1017996 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17145-984449/.minikube/machines/ingress-addon-legacy-855931/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0830 21:52:56.298720 1017996 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-855931 --format={{.State.Status}}
	I0830 21:52:56.334685 1017996 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0830 21:52:56.334706 1017996 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-855931 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0830 21:52:56.437321 1017996 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-855931 --format={{.State.Status}}
	I0830 21:52:56.467559 1017996 machine.go:88] provisioning docker machine ...
	I0830 21:52:56.467588 1017996 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-855931"
	I0830 21:52:56.467654 1017996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-855931
	I0830 21:52:56.491990 1017996 main.go:141] libmachine: Using SSH client type: native
	I0830 21:52:56.492450 1017996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34028 <nil> <nil>}
	I0830 21:52:56.492463 1017996 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-855931 && echo "ingress-addon-legacy-855931" | sudo tee /etc/hostname
	I0830 21:52:56.493041 1017996 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44794->127.0.0.1:34028: read: connection reset by peer
	I0830 21:52:59.648272 1017996 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-855931
	
	I0830 21:52:59.648378 1017996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-855931
	I0830 21:52:59.666535 1017996 main.go:141] libmachine: Using SSH client type: native
	I0830 21:52:59.666968 1017996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34028 <nil> <nil>}
	I0830 21:52:59.666992 1017996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-855931' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-855931/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-855931' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 21:52:59.802288 1017996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 21:52:59.802317 1017996 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17145-984449/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-984449/.minikube}
	I0830 21:52:59.802337 1017996 ubuntu.go:177] setting up certificates
	I0830 21:52:59.802347 1017996 provision.go:83] configureAuth start
	I0830 21:52:59.802407 1017996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-855931
	I0830 21:52:59.820465 1017996 provision.go:138] copyHostCerts
	I0830 21:52:59.820509 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem
	I0830 21:52:59.820541 1017996 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem, removing ...
	I0830 21:52:59.820551 1017996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem
	I0830 21:52:59.820626 1017996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem (1082 bytes)
	I0830 21:52:59.820709 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem
	I0830 21:52:59.820731 1017996 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem, removing ...
	I0830 21:52:59.820739 1017996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem
	I0830 21:52:59.820768 1017996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem (1123 bytes)
	I0830 21:52:59.820811 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem
	I0830 21:52:59.820833 1017996 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem, removing ...
	I0830 21:52:59.820840 1017996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem
	I0830 21:52:59.820864 1017996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem (1679 bytes)
	I0830 21:52:59.820918 1017996 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-855931 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-855931]
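	The SAN list in that line (node IP, localhost, hostname) can be checked against the generated certificate directly; a sketch, assuming openssl is available on the host and using the server.pem path from the log:
	
	  # Dump the Subject Alternative Names of the freshly generated server cert.
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'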
	I0830 21:53:00.135153 1017996 provision.go:172] copyRemoteCerts
	I0830 21:53:00.136856 1017996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 21:53:00.137038 1017996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-855931
	I0830 21:53:00.174047 1017996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/ingress-addon-legacy-855931/id_rsa Username:docker}
	I0830 21:53:00.286710 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0830 21:53:00.286830 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0830 21:53:00.321835 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0830 21:53:00.321925 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0830 21:53:00.357398 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0830 21:53:00.357469 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 21:53:00.391723 1017996 provision.go:86] duration metric: configureAuth took 589.362119ms
	I0830 21:53:00.391750 1017996 ubuntu.go:193] setting minikube options for container-runtime
	I0830 21:53:00.391975 1017996 config.go:182] Loaded profile config "ingress-addon-legacy-855931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0830 21:53:00.392089 1017996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-855931
	I0830 21:53:00.416626 1017996 main.go:141] libmachine: Using SSH client type: native
	I0830 21:53:00.417105 1017996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34028 <nil> <nil>}
	I0830 21:53:00.417154 1017996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 21:53:00.704751 1017996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 21:53:00.704772 1017996 machine.go:91] provisioned docker machine in 4.237194606s
	I0830 21:53:00.704782 1017996 client.go:171] LocalClient.Create took 11.942138405s
	I0830 21:53:00.704809 1017996 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-855931" took 11.942194651s
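	The sysconfig drop-in written over SSH a few lines up can be spot-checked from inside the node; a sketch via minikube ssh (same binary and profile as above):
	
	  # Confirm the insecure-registry option landed in the crio sysconfig drop-in.
	  out/minikube-linux-arm64 -p ingress-addon-legacy-855931 ssh \
	    "cat /etc/sysconfig/crio.minikube"
	  # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '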
	I0830 21:53:00.704817 1017996 start.go:300] post-start starting for "ingress-addon-legacy-855931" (driver="docker")
	I0830 21:53:00.704826 1017996 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 21:53:00.704892 1017996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 21:53:00.704961 1017996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-855931
	I0830 21:53:00.728494 1017996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/ingress-addon-legacy-855931/id_rsa Username:docker}
	I0830 21:53:00.832484 1017996 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 21:53:00.837747 1017996 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0830 21:53:00.837795 1017996 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0830 21:53:00.837807 1017996 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0830 21:53:00.837813 1017996 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0830 21:53:00.837823 1017996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/addons for local assets ...
	I0830 21:53:00.837887 1017996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/files for local assets ...
	I0830 21:53:00.837970 1017996 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem -> 9898252.pem in /etc/ssl/certs
	I0830 21:53:00.837981 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem -> /etc/ssl/certs/9898252.pem
	I0830 21:53:00.838091 1017996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 21:53:00.849716 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem --> /etc/ssl/certs/9898252.pem (1708 bytes)
	I0830 21:53:00.879619 1017996 start.go:303] post-start completed in 174.786425ms
	I0830 21:53:00.880018 1017996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-855931
	I0830 21:53:00.903051 1017996 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/config.json ...
	I0830 21:53:00.903340 1017996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 21:53:00.903392 1017996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-855931
	I0830 21:53:00.921403 1017996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/ingress-addon-legacy-855931/id_rsa Username:docker}
	I0830 21:53:01.019159 1017996 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0830 21:53:01.025378 1017996 start.go:128] duration metric: createHost completed in 12.265100363s
	I0830 21:53:01.025402 1017996 start.go:83] releasing machines lock for "ingress-addon-legacy-855931", held for 12.2652244s
	I0830 21:53:01.025474 1017996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-855931
	I0830 21:53:01.043011 1017996 ssh_runner.go:195] Run: cat /version.json
	I0830 21:53:01.043026 1017996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 21:53:01.043061 1017996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-855931
	I0830 21:53:01.043085 1017996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-855931
	I0830 21:53:01.071016 1017996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/ingress-addon-legacy-855931/id_rsa Username:docker}
	I0830 21:53:01.072155 1017996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/ingress-addon-legacy-855931/id_rsa Username:docker}
	I0830 21:53:01.166369 1017996 ssh_runner.go:195] Run: systemctl --version
	I0830 21:53:01.302891 1017996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 21:53:01.451648 1017996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 21:53:01.458513 1017996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 21:53:01.486825 1017996 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0830 21:53:01.486932 1017996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 21:53:01.529682 1017996 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0830 21:53:01.529705 1017996 start.go:466] detecting cgroup driver to use...
	I0830 21:53:01.529761 1017996 detect.go:196] detected "cgroupfs" cgroup driver on host os
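	That detection should agree with what the host's Docker daemon reports for itself; a one-line sketch for checking it independently:
	
	  # The cgroup driver minikube detects should match dockerd's own setting.
	  docker info --format '{{.CgroupDriver}}'
	  # expected on this host: cgroupfs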
	I0830 21:53:01.529827 1017996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 21:53:01.549630 1017996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 21:53:01.563520 1017996 docker.go:196] disabling cri-docker service (if available) ...
	I0830 21:53:01.563608 1017996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 21:53:01.580002 1017996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 21:53:01.596780 1017996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 21:53:01.706928 1017996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 21:53:01.802051 1017996 docker.go:212] disabling docker service ...
	I0830 21:53:01.802158 1017996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 21:53:01.828359 1017996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 21:53:01.842499 1017996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 21:53:01.933367 1017996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 21:53:02.035019 1017996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 21:53:02.048889 1017996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 21:53:02.069840 1017996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0830 21:53:02.069931 1017996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:53:02.082125 1017996 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 21:53:02.082196 1017996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:53:02.095259 1017996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:53:02.108352 1017996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 21:53:02.121050 1017996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 21:53:02.133211 1017996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 21:53:02.144886 1017996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 21:53:02.160787 1017996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 21:53:02.250794 1017996 ssh_runner.go:195] Run: sudo systemctl restart crio
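	After the sed edits above and the restart, the effective cri-o drop-in can be verified over ssh; a sketch that greps for the three rewritten keys:
	
	  # Verify the pause image, cgroup manager, and conmon cgroup rewrites took effect.
	  out/minikube-linux-arm64 -p ingress-addon-legacy-855931 ssh \
	    "grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"
	  # expected:
	  #   pause_image = "registry.k8s.io/pause:3.2"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"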
	I0830 21:53:02.378702 1017996 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 21:53:02.378797 1017996 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 21:53:02.384025 1017996 start.go:534] Will wait 60s for crictl version
	I0830 21:53:02.384115 1017996 ssh_runner.go:195] Run: which crictl
	I0830 21:53:02.388732 1017996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 21:53:02.436961 1017996 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0830 21:53:02.437070 1017996 ssh_runner.go:195] Run: crio --version
	I0830 21:53:02.483252 1017996 ssh_runner.go:195] Run: crio --version
	I0830 21:53:02.537428 1017996 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0830 21:53:02.539632 1017996 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-855931 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0830 21:53:02.557662 1017996 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0830 21:53:02.562396 1017996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 21:53:02.576136 1017996 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0830 21:53:02.576204 1017996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:53:02.632283 1017996 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0830 21:53:02.632376 1017996 ssh_runner.go:195] Run: which lz4
	I0830 21:53:02.637158 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0830 21:53:02.637263 1017996 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0830 21:53:02.641945 1017996 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 21:53:02.641978 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0830 21:53:04.796113 1017996 crio.go:444] Took 2.158880 seconds to copy over tarball
	I0830 21:53:04.796194 1017996 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 21:53:07.484582 1017996 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.688343014s)
	I0830 21:53:07.484606 1017996 crio.go:451] Took 2.688470 seconds to extract the tarball
	I0830 21:53:07.484616 1017996 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 21:53:07.709669 1017996 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 21:53:07.754317 1017996 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0830 21:53:07.754343 1017996 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0830 21:53:07.754380 1017996 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:53:07.754612 1017996 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0830 21:53:07.754705 1017996 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0830 21:53:07.754793 1017996 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0830 21:53:07.754866 1017996 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0830 21:53:07.754939 1017996 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0830 21:53:07.755007 1017996 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0830 21:53:07.755087 1017996 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0830 21:53:07.756918 1017996 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0830 21:53:07.756931 1017996 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0830 21:53:07.757059 1017996 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0830 21:53:07.757282 1017996 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0830 21:53:07.757309 1017996 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0830 21:53:07.757361 1017996 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0830 21:53:07.757411 1017996 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0830 21:53:07.757432 1017996 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
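The daemon lookups above fail on a fresh runner, so minikube falls back to inspecting the node's container runtime. The check can be reproduced by hand with the same command the log runs:

	# JSON listing that minikube parses to decide whether images are preloaded
	sudo crictl images --output json
	# quick eyeball check for a single image
	sudo crictl images | grep kube-apiserver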
	W0830 21:53:08.177115 1017996 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0830 21:53:08.177369 1017996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0830 21:53:08.220590 1017996 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0830 21:53:08.220819 1017996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0830 21:53:08.227854 1017996 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0830 21:53:08.228260 1017996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0830 21:53:08.233689 1017996 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0830 21:53:08.233773 1017996 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0830 21:53:08.233841 1017996 ssh_runner.go:195] Run: which crictl
	W0830 21:53:08.239997 1017996 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0830 21:53:08.240268 1017996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0830 21:53:08.246150 1017996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0830 21:53:08.255171 1017996 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0830 21:53:08.255401 1017996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0830 21:53:08.272654 1017996 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0830 21:53:08.272977 1017996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0830 21:53:08.333351 1017996 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0830 21:53:08.333392 1017996 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0830 21:53:08.333439 1017996 ssh_runner.go:195] Run: which crictl
	W0830 21:53:08.375461 1017996 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0830 21:53:08.375684 1017996 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:53:08.395814 1017996 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0830 21:53:08.395930 1017996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0830 21:53:08.395966 1017996 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0830 21:53:08.396082 1017996 ssh_runner.go:195] Run: which crictl
	I0830 21:53:08.428625 1017996 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0830 21:53:08.428712 1017996 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0830 21:53:08.428782 1017996 ssh_runner.go:195] Run: which crictl
	I0830 21:53:08.428908 1017996 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0830 21:53:08.428955 1017996 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0830 21:53:08.429002 1017996 ssh_runner.go:195] Run: which crictl
	I0830 21:53:08.449877 1017996 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0830 21:53:08.449967 1017996 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0830 21:53:08.450036 1017996 ssh_runner.go:195] Run: which crictl
	I0830 21:53:08.477618 1017996 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0830 21:53:08.477785 1017996 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0830 21:53:08.477738 1017996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0830 21:53:08.477863 1017996 ssh_runner.go:195] Run: which crictl
	I0830 21:53:08.585089 1017996 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0830 21:53:08.585170 1017996 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:53:08.585232 1017996 ssh_runner.go:195] Run: which crictl
	I0830 21:53:08.585301 1017996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0830 21:53:08.585370 1017996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0830 21:53:08.585473 1017996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0830 21:53:08.585589 1017996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0830 21:53:08.585626 1017996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0830 21:53:08.585727 1017996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0830 21:53:08.585792 1017996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0830 21:53:08.705260 1017996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0830 21:53:08.705343 1017996 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:53:08.705425 1017996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0830 21:53:08.723398 1017996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0830 21:53:08.723467 1017996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0830 21:53:08.730753 1017996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0830 21:53:08.787243 1017996 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0830 21:53:08.787356 1017996 cache_images.go:92] LoadImages completed in 1.032999345s
	W0830 21:53:08.787426 1017996 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
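The arch-mismatch warnings and the "needs transfer" decisions above both come from comparing an image's recorded architecture against the host's. A hedged sketch of the manual check (the Architecture format field is assumed to be available in this podman build):

	sudo podman image inspect --format '{{.Architecture}}' registry.k8s.io/kube-apiserver:v1.18.20
	uname -m   # aarch64 on this runner, hence the amd64 images are rejected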
	I0830 21:53:08.787497 1017996 ssh_runner.go:195] Run: crio config
	I0830 21:53:08.859893 1017996 cni.go:84] Creating CNI manager for ""
	I0830 21:53:08.859915 1017996 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0830 21:53:08.859956 1017996 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 21:53:08.859978 1017996 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-855931 NodeName:ingress-addon-legacy-855931 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0830 21:53:08.860152 1017996 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-855931"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
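A generated config like the one above can be sanity-checked before committing to it; one hypothetical invocation is kubeadm's dry-run mode (minikube itself proceeds straight to 'kubeadm init' below):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run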
	I0830 21:53:08.860243 1017996 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-855931 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-855931 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
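In the drop-in above, the empty 'ExecStart=' line is the standard systemd idiom for clearing the packaged unit's command before overriding it. The merged unit can be inspected on the node with:

	systemctl cat kubelet
	systemctl show kubelet -p ExecStart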
	I0830 21:53:08.860324 1017996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0830 21:53:08.871226 1017996 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 21:53:08.871338 1017996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 21:53:08.882369 1017996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0830 21:53:08.904149 1017996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0830 21:53:08.925809 1017996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0830 21:53:08.948159 1017996 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0830 21:53:08.953019 1017996 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 21:53:08.967280 1017996 certs.go:56] Setting up /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931 for IP: 192.168.49.2
	I0830 21:53:08.967371 1017996 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1c893f087ee62e9f919bfa6a6de84891ee8b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:53:08.967571 1017996 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.key
	I0830 21:53:08.967617 1017996 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.key
	I0830 21:53:08.967666 1017996 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.key
	I0830 21:53:08.967688 1017996 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt with IP's: []
	I0830 21:53:09.941935 1017996 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt ...
	I0830 21:53:09.941967 1017996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: {Name:mk1a30c113e0d62592ec488ff1f54105e9667443 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:53:09.942171 1017996 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.key ...
	I0830 21:53:09.942188 1017996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.key: {Name:mk375ff5d6d14a622bb03042fea7b304b0a4d299 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:53:09.942283 1017996 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/apiserver.key.dd3b5fb2
	I0830 21:53:09.942300 1017996 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0830 21:53:10.302126 1017996 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/apiserver.crt.dd3b5fb2 ...
	I0830 21:53:10.302157 1017996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/apiserver.crt.dd3b5fb2: {Name:mk1b838af4bf1c3d8f90e035a9784ac2d5bb41d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:53:10.302338 1017996 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/apiserver.key.dd3b5fb2 ...
	I0830 21:53:10.302352 1017996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/apiserver.key.dd3b5fb2: {Name:mk0e600af00c87f1489a775a2c2d5a7cce34ab46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:53:10.302437 1017996 certs.go:337] copying /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/apiserver.crt
	I0830 21:53:10.302515 1017996 certs.go:341] copying /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/apiserver.key
	I0830 21:53:10.302572 1017996 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/proxy-client.key
	I0830 21:53:10.302588 1017996 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/proxy-client.crt with IP's: []
	I0830 21:53:11.011844 1017996 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/proxy-client.crt ...
	I0830 21:53:11.011877 1017996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/proxy-client.crt: {Name:mk75dcfb8665e5061e80bc7370dca9bf3ca33760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:53:11.012085 1017996 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/proxy-client.key ...
	I0830 21:53:11.012095 1017996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/proxy-client.key: {Name:mk5cb907003898809c98c588db3e6836d1a78ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:53:11.012176 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0830 21:53:11.012200 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0830 21:53:11.012213 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0830 21:53:11.012229 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0830 21:53:11.012243 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0830 21:53:11.012255 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0830 21:53:11.012273 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0830 21:53:11.012287 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0830 21:53:11.012354 1017996 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/989825.pem (1338 bytes)
	W0830 21:53:11.012400 1017996 certs.go:433] ignoring /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/989825_empty.pem, impossibly tiny 0 bytes
	I0830 21:53:11.012412 1017996 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem (1675 bytes)
	I0830 21:53:11.012442 1017996 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem (1082 bytes)
	I0830 21:53:11.012471 1017996 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem (1123 bytes)
	I0830 21:53:11.012501 1017996 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem (1679 bytes)
	I0830 21:53:11.012556 1017996 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem (1708 bytes)
	I0830 21:53:11.012587 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:53:11.012603 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/989825.pem -> /usr/share/ca-certificates/989825.pem
	I0830 21:53:11.012614 1017996 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem -> /usr/share/ca-certificates/9898252.pem
	I0830 21:53:11.013325 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 21:53:11.047093 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 21:53:11.076182 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 21:53:11.107525 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 21:53:11.136786 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 21:53:11.165692 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 21:53:11.194776 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 21:53:11.223991 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0830 21:53:11.254037 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 21:53:11.283348 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/certs/989825.pem --> /usr/share/ca-certificates/989825.pem (1338 bytes)
	I0830 21:53:11.312727 1017996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem --> /usr/share/ca-certificates/9898252.pem (1708 bytes)
	I0830 21:53:11.342598 1017996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 21:53:11.364793 1017996 ssh_runner.go:195] Run: openssl version
	I0830 21:53:11.372555 1017996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 21:53:11.385280 1017996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:53:11.390553 1017996 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:53:11.390646 1017996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 21:53:11.400019 1017996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 21:53:11.412987 1017996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989825.pem && ln -fs /usr/share/ca-certificates/989825.pem /etc/ssl/certs/989825.pem"
	I0830 21:53:11.425291 1017996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989825.pem
	I0830 21:53:11.430336 1017996 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:45 /usr/share/ca-certificates/989825.pem
	I0830 21:53:11.430437 1017996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989825.pem
	I0830 21:53:11.439151 1017996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/989825.pem /etc/ssl/certs/51391683.0"
	I0830 21:53:11.451574 1017996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9898252.pem && ln -fs /usr/share/ca-certificates/9898252.pem /etc/ssl/certs/9898252.pem"
	I0830 21:53:11.463915 1017996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9898252.pem
	I0830 21:53:11.468786 1017996 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:45 /usr/share/ca-certificates/9898252.pem
	I0830 21:53:11.468860 1017996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9898252.pem
	I0830 21:53:11.477749 1017996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9898252.pem /etc/ssl/certs/3ec20f2e.0"
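The '.0' symlinks created above follow OpenSSL's subject-hash lookup convention: the link name is the certificate's subject hash plus a sequence number. Recomputing one by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints e.g. b5213941, matching the /etc/ssl/certs/b5213941.0 link above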
	I0830 21:53:11.489798 1017996 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 21:53:11.494328 1017996 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 21:53:11.494379 1017996 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-855931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-855931 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:53:11.494457 1017996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 21:53:11.494514 1017996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 21:53:11.537490 1017996 cri.go:89] found id: ""
	I0830 21:53:11.537566 1017996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 21:53:11.548677 1017996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 21:53:11.559911 1017996 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0830 21:53:11.559982 1017996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 21:53:11.571131 1017996 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 21:53:11.571203 1017996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0830 21:53:11.629487 1017996 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0830 21:53:11.630131 1017996 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 21:53:11.684762 1017996 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0830 21:53:11.684856 1017996 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1043-aws
	I0830 21:53:11.684897 1017996 kubeadm.go:322] OS: Linux
	I0830 21:53:11.684943 1017996 kubeadm.go:322] CGROUPS_CPU: enabled
	I0830 21:53:11.684993 1017996 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0830 21:53:11.685046 1017996 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0830 21:53:11.685096 1017996 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0830 21:53:11.685166 1017996 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0830 21:53:11.685218 1017996 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0830 21:53:11.777221 1017996 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 21:53:11.777374 1017996 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 21:53:11.777500 1017996 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 21:53:12.036159 1017996 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 21:53:12.038362 1017996 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 21:53:12.038446 1017996 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 21:53:12.137580 1017996 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 21:53:12.141344 1017996 out.go:204]   - Generating certificates and keys ...
	I0830 21:53:12.141484 1017996 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 21:53:12.141554 1017996 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 21:53:12.729258 1017996 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0830 21:53:13.469258 1017996 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0830 21:53:13.746983 1017996 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0830 21:53:14.217702 1017996 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0830 21:53:14.357095 1017996 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0830 21:53:14.357702 1017996 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-855931 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0830 21:53:14.771002 1017996 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0830 21:53:14.771392 1017996 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-855931 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0830 21:53:15.068481 1017996 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0830 21:53:15.218069 1017996 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0830 21:53:16.570860 1017996 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0830 21:53:16.570986 1017996 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 21:53:16.904699 1017996 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 21:53:17.334134 1017996 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 21:53:18.107422 1017996 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 21:53:18.311318 1017996 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 21:53:18.312187 1017996 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 21:53:18.315166 1017996 out.go:204]   - Booting up control plane ...
	I0830 21:53:18.315274 1017996 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 21:53:18.321534 1017996 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 21:53:18.323743 1017996 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 21:53:18.327211 1017996 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 21:53:18.332149 1017996 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 21:53:30.337150 1017996 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002552 seconds
	I0830 21:53:30.337271 1017996 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 21:53:30.352174 1017996 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 21:53:30.875015 1017996 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 21:53:30.875171 1017996 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-855931 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0830 21:53:31.383637 1017996 kubeadm.go:322] [bootstrap-token] Using token: iht8dg.uc3kiztrqj7iz1h6
	I0830 21:53:31.385825 1017996 out.go:204]   - Configuring RBAC rules ...
	I0830 21:53:31.385944 1017996 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 21:53:31.390760 1017996 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 21:53:31.399038 1017996 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 21:53:31.401992 1017996 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 21:53:31.404749 1017996 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 21:53:31.407401 1017996 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 21:53:31.422952 1017996 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 21:53:31.710248 1017996 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 21:53:31.826405 1017996 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 21:53:31.827855 1017996 kubeadm.go:322] 
	I0830 21:53:31.827927 1017996 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 21:53:31.827936 1017996 kubeadm.go:322] 
	I0830 21:53:31.828008 1017996 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 21:53:31.828017 1017996 kubeadm.go:322] 
	I0830 21:53:31.828041 1017996 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 21:53:31.828100 1017996 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 21:53:31.828150 1017996 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 21:53:31.828159 1017996 kubeadm.go:322] 
	I0830 21:53:31.828212 1017996 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 21:53:31.828287 1017996 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 21:53:31.828355 1017996 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 21:53:31.828363 1017996 kubeadm.go:322] 
	I0830 21:53:31.828443 1017996 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 21:53:31.828518 1017996 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 21:53:31.828526 1017996 kubeadm.go:322] 
	I0830 21:53:31.828605 1017996 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token iht8dg.uc3kiztrqj7iz1h6 \
	I0830 21:53:31.828708 1017996 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:dbb2d1601005e0eb74ea76f1ea00d2a8cf049d471533cfdd7a067e3844af0231 \
	I0830 21:53:31.828734 1017996 kubeadm.go:322]     --control-plane 
	I0830 21:53:31.828741 1017996 kubeadm.go:322] 
	I0830 21:53:31.828822 1017996 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 21:53:31.828830 1017996 kubeadm.go:322] 
	I0830 21:53:31.828908 1017996 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token iht8dg.uc3kiztrqj7iz1h6 \
	I0830 21:53:31.829010 1017996 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:dbb2d1601005e0eb74ea76f1ea00d2a8cf049d471533cfdd7a067e3844af0231 
	I0830 21:53:31.831810 1017996 kubeadm.go:322] W0830 21:53:11.628569    1229 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0830 21:53:31.832025 1017996 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1043-aws\n", err: exit status 1
	I0830 21:53:31.832129 1017996 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 21:53:31.832252 1017996 kubeadm.go:322] W0830 21:53:18.320387    1229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0830 21:53:31.832374 1017996 kubeadm.go:322] W0830 21:53:18.323500    1229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
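The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control plane with the standard kubeadm recipe (CA path as used by minikube):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'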
	I0830 21:53:31.832390 1017996 cni.go:84] Creating CNI manager for ""
	I0830 21:53:31.832398 1017996 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0830 21:53:31.839383 1017996 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0830 21:53:31.841407 1017996 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0830 21:53:31.848418 1017996 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0830 21:53:31.848442 1017996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0830 21:53:31.876182 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0830 21:53:32.348054 1017996 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 21:53:32.348182 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:32.348254 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7e60a4db8510b81002db541520f138fed781588 minikube.k8s.io/name=ingress-addon-legacy-855931 minikube.k8s.io/updated_at=2023_08_30T21_53_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:32.501679 1017996 ops.go:34] apiserver oom_adj: -16
	I0830 21:53:32.501763 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:32.597245 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:33.194337 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:33.694364 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:34.194903 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:34.695137 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:35.194296 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:35.695069 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:36.195098 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:36.694687 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:37.194787 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:37.694816 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:38.194308 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:38.695140 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:39.195221 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:39.694320 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:40.194313 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:40.694804 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:41.194914 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:41.694405 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:42.195195 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:42.694704 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:43.194266 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:43.694745 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:44.194281 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:44.694320 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:45.195287 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:45.694294 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:46.194940 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:46.694247 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:47.194705 1017996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 21:53:47.293867 1017996 kubeadm.go:1081] duration metric: took 14.945735149s to wait for elevateKubeSystemPrivileges.
	I0830 21:53:47.293901 1017996 kubeadm.go:406] StartCluster complete in 35.799527003s
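The repeated 'kubectl get sa default' runs above are a simple existence poll: the default service account only appears once kube-controller-manager is serving. The shell equivalent is roughly:

	until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done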
	I0830 21:53:47.293917 1017996 settings.go:142] acquiring lock: {Name:mkc3addaaa213f1dd8b8b58d94d3f946bbcb1099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:53:47.293975 1017996 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 21:53:47.294652 1017996 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/kubeconfig: {Name:mk735c90eaee551cc7c6cf5c5ad3cfbf98dfe457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:53:47.295427 1017996 kapi.go:59] client config for ingress-addon-legacy-855931: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.key", CAFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1723840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:53:47.296866 1017996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 21:53:47.297545 1017996 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 21:53:47.297618 1017996 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-855931"
	I0830 21:53:47.297634 1017996 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-855931"
	I0830 21:53:47.297677 1017996 host.go:66] Checking if "ingress-addon-legacy-855931" exists ...
	I0830 21:53:47.297850 1017996 config.go:182] Loaded profile config "ingress-addon-legacy-855931": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0830 21:53:47.297903 1017996 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-855931"
	I0830 21:53:47.297926 1017996 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-855931"
	I0830 21:53:47.298302 1017996 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-855931 --format={{.State.Status}}
	I0830 21:53:47.297911 1017996 cert_rotation.go:137] Starting client certificate rotation controller
	I0830 21:53:47.299144 1017996 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-855931 --format={{.State.Status}}
	I0830 21:53:47.331607 1017996 kapi.go:59] client config for ingress-addon-legacy-855931: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.key", CAFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1723840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:53:47.380727 1017996 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 21:53:47.382992 1017996 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 21:53:47.383053 1017996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 21:53:47.383137 1017996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-855931
	I0830 21:53:47.400688 1017996 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-855931"
	I0830 21:53:47.400732 1017996 host.go:66] Checking if "ingress-addon-legacy-855931" exists ...
	I0830 21:53:47.401256 1017996 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-855931 --format={{.State.Status}}
	I0830 21:53:47.439713 1017996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/ingress-addon-legacy-855931/id_rsa Username:docker}
	I0830 21:53:47.442555 1017996 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-855931" context rescaled to 1 replicas
	I0830 21:53:47.442592 1017996 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 21:53:47.444347 1017996 out.go:177] * Verifying Kubernetes components...
	I0830 21:53:47.446640 1017996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:53:47.461001 1017996 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 21:53:47.461025 1017996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 21:53:47.461090 1017996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-855931
	I0830 21:53:47.488000 1017996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34028 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/ingress-addon-legacy-855931/id_rsa Username:docker}
	I0830 21:53:47.624331 1017996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0830 21:53:47.625088 1017996 kapi.go:59] client config for ingress-addon-legacy-855931: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.key", CAFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1723840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 21:53:47.625470 1017996 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-855931" to be "Ready" ...
	I0830 21:53:47.632651 1017996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 21:53:47.673462 1017996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 21:53:48.154876 1017996 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
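
The sed pipeline a few lines up patches the coredns ConfigMap in place. After the replace, the Corefile should carry a hosts block ahead of the forward directive, roughly like this (reconstructed from the sed expressions above, not dumped from the cluster; unchanged directives are elided):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

This is what makes host.minikube.internal resolvable from pods, which the "host record injected" line above confirms.
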
	I0830 21:53:48.234108 1017996 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0830 21:53:48.236266 1017996 addons.go:502] enable addons completed in 938.710208ms: enabled=[storage-provisioner default-storageclass]
	I0830 21:53:49.672220 1017996 node_ready.go:58] node "ingress-addon-legacy-855931" has status "Ready":"False"
	I0830 21:53:52.171776 1017996 node_ready.go:58] node "ingress-addon-legacy-855931" has status "Ready":"False"
	I0830 21:53:54.671473 1017996 node_ready.go:58] node "ingress-addon-legacy-855931" has status "Ready":"False"
	I0830 21:53:55.671485 1017996 node_ready.go:49] node "ingress-addon-legacy-855931" has status "Ready":"True"
	I0830 21:53:55.671515 1017996 node_ready.go:38] duration metric: took 8.045893126s waiting for node "ingress-addon-legacy-855931" to be "Ready" ...
	I0830 21:53:55.671527 1017996 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 21:53:55.679612 1017996 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-59cjz" in "kube-system" namespace to be "Ready" ...
	I0830 21:53:57.687785 1017996 pod_ready.go:102] pod "coredns-66bff467f8-59cjz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 21:53:47 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0830 21:53:59.692563 1017996 pod_ready.go:102] pod "coredns-66bff467f8-59cjz" in "kube-system" namespace has status "Ready":"False"
	I0830 21:54:02.190584 1017996 pod_ready.go:102] pod "coredns-66bff467f8-59cjz" in "kube-system" namespace has status "Ready":"False"
	I0830 21:54:04.191297 1017996 pod_ready.go:102] pod "coredns-66bff467f8-59cjz" in "kube-system" namespace has status "Ready":"False"
	I0830 21:54:04.690208 1017996 pod_ready.go:92] pod "coredns-66bff467f8-59cjz" in "kube-system" namespace has status "Ready":"True"
	I0830 21:54:04.690239 1017996 pod_ready.go:81] duration metric: took 9.010593249s waiting for pod "coredns-66bff467f8-59cjz" in "kube-system" namespace to be "Ready" ...
	I0830 21:54:04.690252 1017996 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-855931" in "kube-system" namespace to be "Ready" ...
	I0830 21:54:04.694995 1017996 pod_ready.go:92] pod "etcd-ingress-addon-legacy-855931" in "kube-system" namespace has status "Ready":"True"
	I0830 21:54:04.695019 1017996 pod_ready.go:81] duration metric: took 4.758649ms waiting for pod "etcd-ingress-addon-legacy-855931" in "kube-system" namespace to be "Ready" ...
	I0830 21:54:04.695034 1017996 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-855931" in "kube-system" namespace to be "Ready" ...
	I0830 21:54:04.699720 1017996 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-855931" in "kube-system" namespace has status "Ready":"True"
	I0830 21:54:04.699746 1017996 pod_ready.go:81] duration metric: took 4.704872ms waiting for pod "kube-apiserver-ingress-addon-legacy-855931" in "kube-system" namespace to be "Ready" ...
	I0830 21:54:04.699771 1017996 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-855931" in "kube-system" namespace to be "Ready" ...
	I0830 21:54:04.704780 1017996 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-855931" in "kube-system" namespace has status "Ready":"True"
	I0830 21:54:04.704805 1017996 pod_ready.go:81] duration metric: took 5.001856ms waiting for pod "kube-controller-manager-ingress-addon-legacy-855931" in "kube-system" namespace to be "Ready" ...
	I0830 21:54:04.704816 1017996 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7cxwp" in "kube-system" namespace to be "Ready" ...
	I0830 21:54:04.709571 1017996 pod_ready.go:92] pod "kube-proxy-7cxwp" in "kube-system" namespace has status "Ready":"True"
	I0830 21:54:04.709595 1017996 pod_ready.go:81] duration metric: took 4.772351ms waiting for pod "kube-proxy-7cxwp" in "kube-system" namespace to be "Ready" ...
	I0830 21:54:04.709607 1017996 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-855931" in "kube-system" namespace to be "Ready" ...
	I0830 21:54:04.886096 1017996 request.go:629] Waited for 176.350233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-855931
	I0830 21:54:05.085160 1017996 request.go:629] Waited for 196.275773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-855931
	I0830 21:54:05.088267 1017996 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-855931" in "kube-system" namespace has status "Ready":"True"
	I0830 21:54:05.088291 1017996 pod_ready.go:81] duration metric: took 378.67655ms waiting for pod "kube-scheduler-ingress-addon-legacy-855931" in "kube-system" namespace to be "Ready" ...
	I0830 21:54:05.088303 1017996 pod_ready.go:38] duration metric: took 9.416760196s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
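
The pod_ready.go waits above amount to polling each pod and testing its PodReady condition. A minimal client-go sketch of the same check; the pod name and the 6m budget come from the log, while the kubeconfig path and 2s cadence are illustrative:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True, which is
    // exactly the "Ready" status the log lines above are waiting on.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Poll every 2s for up to 6m, mirroring the 6m0s budget in the log.
    	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bff467f8-59cjz", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // tolerate transient API errors and keep polling
    		}
    		return isPodReady(pod), nil
    	})
    	fmt.Println("pod ready:", err == nil)
    }
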
	I0830 21:54:05.088340 1017996 api_server.go:52] waiting for apiserver process to appear ...
	I0830 21:54:05.088415 1017996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 21:54:05.106146 1017996 api_server.go:72] duration metric: took 17.663524077s to wait for apiserver process to appear ...
	I0830 21:54:05.106173 1017996 api_server.go:88] waiting for apiserver healthz status ...
	I0830 21:54:05.106201 1017996 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0830 21:54:05.116177 1017996 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0830 21:54:05.117102 1017996 api_server.go:141] control plane version: v1.18.20
	I0830 21:54:05.117212 1017996 api_server.go:131] duration metric: took 11.03079ms to wait for apiserver health ...
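
The healthz check above is a bare GET against the apiserver that returns the literal body "ok". With a clientset in hand it can be reproduced through the discovery REST client; a sketch, with the kubeconfig path again illustrative:

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// GET https://<apiserver>/healthz, matching the "returned 200: ok" line above.
    	body, err := client.Discovery().RESTClient().
    		Get().AbsPath("/healthz").Do(context.TODO()).Raw()
    	fmt.Println(string(body), err)
    }
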
	I0830 21:54:05.117223 1017996 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 21:54:05.285741 1017996 request.go:629] Waited for 168.349191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0830 21:54:05.306591 1017996 system_pods.go:59] 8 kube-system pods found
	I0830 21:54:05.306631 1017996 system_pods.go:61] "coredns-66bff467f8-59cjz" [bba70a18-316c-49e4-aa5b-8ff946380c33] Running
	I0830 21:54:05.306638 1017996 system_pods.go:61] "etcd-ingress-addon-legacy-855931" [40df5788-ab76-4a45-85a7-0bf7b63188a1] Running
	I0830 21:54:05.306644 1017996 system_pods.go:61] "kindnet-5q8cw" [7268ae54-6f29-4f6d-abe5-d3ffc3b23bdc] Running
	I0830 21:54:05.306674 1017996 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-855931" [cb3276ee-1f92-4755-95a1-ed4d2c24c5f0] Running
	I0830 21:54:05.306686 1017996 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-855931" [8564f0c8-ef7b-436d-ac75-763c4050b607] Running
	I0830 21:54:05.306691 1017996 system_pods.go:61] "kube-proxy-7cxwp" [6d3f85c2-91a4-41fd-8034-7dd65c3a0dab] Running
	I0830 21:54:05.306697 1017996 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-855931" [70c3edd2-496a-49c7-8eee-eb39172319ab] Running
	I0830 21:54:05.306701 1017996 system_pods.go:61] "storage-provisioner" [cc2a7c03-81e5-4939-ae24-d219cc5f690a] Running
	I0830 21:54:05.306711 1017996 system_pods.go:74] duration metric: took 189.45069ms to wait for pod list to return data ...
	I0830 21:54:05.306720 1017996 default_sa.go:34] waiting for default service account to be created ...
	I0830 21:54:05.485027 1017996 request.go:629] Waited for 178.190451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0830 21:54:05.487492 1017996 default_sa.go:45] found service account: "default"
	I0830 21:54:05.487519 1017996 default_sa.go:55] duration metric: took 180.793165ms for default service account to be created ...
	I0830 21:54:05.487530 1017996 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 21:54:05.685960 1017996 request.go:629] Waited for 198.363294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0830 21:54:05.691994 1017996 system_pods.go:86] 8 kube-system pods found
	I0830 21:54:05.692026 1017996 system_pods.go:89] "coredns-66bff467f8-59cjz" [bba70a18-316c-49e4-aa5b-8ff946380c33] Running
	I0830 21:54:05.692037 1017996 system_pods.go:89] "etcd-ingress-addon-legacy-855931" [40df5788-ab76-4a45-85a7-0bf7b63188a1] Running
	I0830 21:54:05.692042 1017996 system_pods.go:89] "kindnet-5q8cw" [7268ae54-6f29-4f6d-abe5-d3ffc3b23bdc] Running
	I0830 21:54:05.692048 1017996 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-855931" [cb3276ee-1f92-4755-95a1-ed4d2c24c5f0] Running
	I0830 21:54:05.692054 1017996 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-855931" [8564f0c8-ef7b-436d-ac75-763c4050b607] Running
	I0830 21:54:05.692059 1017996 system_pods.go:89] "kube-proxy-7cxwp" [6d3f85c2-91a4-41fd-8034-7dd65c3a0dab] Running
	I0830 21:54:05.692064 1017996 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-855931" [70c3edd2-496a-49c7-8eee-eb39172319ab] Running
	I0830 21:54:05.692070 1017996 system_pods.go:89] "storage-provisioner" [cc2a7c03-81e5-4939-ae24-d219cc5f690a] Running
	I0830 21:54:05.692079 1017996 system_pods.go:126] duration metric: took 204.5412ms to wait for k8s-apps to be running ...
	I0830 21:54:05.692090 1017996 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 21:54:05.692150 1017996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 21:54:05.707088 1017996 system_svc.go:56] duration metric: took 14.987297ms WaitForService to wait for kubelet.
	I0830 21:54:05.707117 1017996 kubeadm.go:581] duration metric: took 18.264500321s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 21:54:05.707139 1017996 node_conditions.go:102] verifying NodePressure condition ...
	I0830 21:54:05.885553 1017996 request.go:629] Waited for 178.313348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0830 21:54:05.888219 1017996 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0830 21:54:05.888250 1017996 node_conditions.go:123] node cpu capacity is 2
	I0830 21:54:05.888263 1017996 node_conditions.go:105] duration metric: took 181.118416ms to run NodePressure ...
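
The recurring "Waited for ... due to client-side throttling" lines in this stretch come from client-go's token-bucket limiter: with QPS and Burst left at zero in the rest.Config (see the dump earlier), the client caps itself at roughly 5 requests/sec with bursts of 10 and delays anything beyond that on its own side. A sketch of raising the limits; QPS and Burst are real rest.Config fields, the values here are arbitrary:

    package main

    import "k8s.io/client-go/rest"

    // tuneRateLimits bumps the client-side limiter so back-to-back GETs
    // (like the pod and node listings above) are not queued by throttling.
    func tuneRateLimits(cfg *rest.Config) {
    	cfg.QPS = 50    // sustained requests per second; 0 means the default of 5
    	cfg.Burst = 100 // short-burst allowance; 0 means the default of 10
    }

    func main() {}
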
	I0830 21:54:05.888275 1017996 start.go:228] waiting for startup goroutines ...
	I0830 21:54:05.888282 1017996 start.go:233] waiting for cluster config update ...
	I0830 21:54:05.888292 1017996 start.go:242] writing updated cluster config ...
	I0830 21:54:05.888574 1017996 ssh_runner.go:195] Run: rm -f paused
	I0830 21:54:05.948738 1017996 start.go:600] kubectl: 1.28.1, cluster: 1.18.20 (minor skew: 10)
	I0830 21:54:05.951154 1017996 out.go:177] 
	W0830 21:54:05.953089 1017996 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0830 21:54:05.954823 1017996 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0830 21:54:05.956694 1017996 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-855931" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Aug 30 21:57:07 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:07.133549325Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-b46lq/hello-world-app" id=2b55d40f-cb6f-479e-9015-eb0c0eec834e name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 30 21:57:07 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:07.133637719Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 30 21:57:07 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:07.221544765Z" level=info msg="Created container 51d39c53d3f7f980e593dd7b79092652e1f45ead19ccdac59fb12a49abbf185e: default/hello-world-app-5f5d8b66bb-b46lq/hello-world-app" id=2b55d40f-cb6f-479e-9015-eb0c0eec834e name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 30 21:57:07 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:07.222459160Z" level=info msg="Starting container: 51d39c53d3f7f980e593dd7b79092652e1f45ead19ccdac59fb12a49abbf185e" id=b1e756af-07a4-4fc0-bd2b-5fbf45a29480 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 30 21:57:07 ingress-addon-legacy-855931 conmon[3532]: conmon 51d39c53d3f7f980e593 <ninfo>: container 3543 exited with status 1
	Aug 30 21:57:07 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:07.238428578Z" level=info msg="Started container" PID=3543 containerID=51d39c53d3f7f980e593dd7b79092652e1f45ead19ccdac59fb12a49abbf185e description=default/hello-world-app-5f5d8b66bb-b46lq/hello-world-app id=b1e756af-07a4-4fc0-bd2b-5fbf45a29480 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=033c8938fd4a32d783415229650c049bdd1a082205149fa498f0008520067677
	Aug 30 21:57:07 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:07.621486126Z" level=info msg="Removing container: 7b1b83bc800648027bf2162cb1b11521a2a7f1c4b6936784b21fb93088775eda" id=dd50d077-3155-47b4-8d6b-fe8c2ce0d5ef name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 30 21:57:07 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:07.649787699Z" level=info msg="Removed container 7b1b83bc800648027bf2162cb1b11521a2a7f1c4b6936784b21fb93088775eda: default/hello-world-app-5f5d8b66bb-b46lq/hello-world-app" id=dd50d077-3155-47b4-8d6b-fe8c2ce0d5ef name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 30 21:57:08 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:08.582410568Z" level=info msg="Stopping container: 6212db72e00213cbb3a2bc98ac59be4ada0d1d5e4201d9aef82a8e3f5a09d13d (timeout: 2s)" id=690dd789-3677-44fd-8a91-ac98c4d1b3f8 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 30 21:57:08 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:08.602960543Z" level=info msg="Stopping container: 6212db72e00213cbb3a2bc98ac59be4ada0d1d5e4201d9aef82a8e3f5a09d13d (timeout: 2s)" id=6d58c773-4051-49ae-a3a5-b58d8a949675 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 30 21:57:10 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:10.602448546Z" level=warning msg="Stopping container 6212db72e00213cbb3a2bc98ac59be4ada0d1d5e4201d9aef82a8e3f5a09d13d with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=690dd789-3677-44fd-8a91-ac98c4d1b3f8 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 30 21:57:10 ingress-addon-legacy-855931 conmon[2706]: conmon 6212db72e00213cbb3a2 <ninfo>: container 2717 exited with status 137
	Aug 30 21:57:10 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:10.789382142Z" level=info msg="Stopped container 6212db72e00213cbb3a2bc98ac59be4ada0d1d5e4201d9aef82a8e3f5a09d13d: ingress-nginx/ingress-nginx-controller-7fcf777cb7-9qdbb/controller" id=6d58c773-4051-49ae-a3a5-b58d8a949675 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 30 21:57:10 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:10.790003040Z" level=info msg="Stopping pod sandbox: e76a7dee54deb0dc0198c919c6069898f772639a319631e9f1bc994976a09aba" id=6c4a1448-55df-4f45-961b-1355ddfb2068 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 30 21:57:10 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:10.791641454Z" level=info msg="Stopped container 6212db72e00213cbb3a2bc98ac59be4ada0d1d5e4201d9aef82a8e3f5a09d13d: ingress-nginx/ingress-nginx-controller-7fcf777cb7-9qdbb/controller" id=690dd789-3677-44fd-8a91-ac98c4d1b3f8 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 30 21:57:10 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:10.792147612Z" level=info msg="Stopping pod sandbox: e76a7dee54deb0dc0198c919c6069898f772639a319631e9f1bc994976a09aba" id=82e708a2-d6ed-44a4-998a-713af592ef26 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 30 21:57:10 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:10.793464197Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-XWD6KY47M6W7H2GS - [0:0]\n:KUBE-HP-CN2PXIMN4NW53RUD - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-CN2PXIMN4NW53RUD\n-X KUBE-HP-XWD6KY47M6W7H2GS\nCOMMIT\n"
	Aug 30 21:57:10 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:10.794947494Z" level=info msg="Closing host port tcp:80"
	Aug 30 21:57:10 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:10.794995034Z" level=info msg="Closing host port tcp:443"
	Aug 30 21:57:10 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:10.796197815Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 30 21:57:10 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:10.796223808Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 30 21:57:10 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:10.796374011Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-9qdbb Namespace:ingress-nginx ID:e76a7dee54deb0dc0198c919c6069898f772639a319631e9f1bc994976a09aba UID:7acf5f48-8f0b-45c0-81e9-d6f871295688 NetNS:/var/run/netns/8fb267ee-c3bb-45a6-98db-d7a0058c3a4a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 30 21:57:10 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:10.796553941Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-9qdbb from CNI network \"kindnet\" (type=ptp)"
	Aug 30 21:57:10 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:10.826677420Z" level=info msg="Stopped pod sandbox: e76a7dee54deb0dc0198c919c6069898f772639a319631e9f1bc994976a09aba" id=6c4a1448-55df-4f45-961b-1355ddfb2068 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 30 21:57:10 ingress-addon-legacy-855931 crio[896]: time="2023-08-30 21:57:10.826798068Z" level=info msg="Stopped pod sandbox (already stopped): e76a7dee54deb0dc0198c919c6069898f772639a319631e9f1bc994976a09aba" id=82e708a2-d6ed-44a4-998a-713af592ef26 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
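
One detail worth decoding in the CRI-O log above: the ingress controller ignored its stop signal for the 2s grace period, so the runtime escalated to SIGKILL and conmon reported exit status 137. That number is the usual signal-death encoding:

    137 = 128 + 9   (128 + signal number; SIGKILL is signal 9)
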
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	51d39c53d3f7f       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                   9 seconds ago       Exited              hello-world-app           2                   033c8938fd4a3       hello-world-app-5f5d8b66bb-b46lq
	5d4284fd2c397       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                    2 minutes ago       Running             nginx                     0                   127c44a354f6d       nginx
	6212db72e0021       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   e76a7dee54deb       ingress-nginx-controller-7fcf777cb7-9qdbb
	2ff20d471e627       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   139cf6eecabc7       ingress-nginx-admission-patch-px8h7
	d027d64659d76       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   1b9a361f7b674       ingress-nginx-admission-create-f4qrp
	54d82ad52c7aa       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   44723224fd4c1       storage-provisioner
	4100edc6bb417       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   d17e58d8730d1       coredns-66bff467f8-59cjz
	6547cfbda2c25       docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f                 3 minutes ago       Running             kindnet-cni               0                   4da79490260c3       kindnet-5q8cw
	df33f3140611b       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   473c4c850e807       kube-proxy-7cxwp
	6bdf159ea6eb0       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   3 minutes ago       Running             kube-scheduler            0                   c41ff78394d7c       kube-scheduler-ingress-addon-legacy-855931
	7a30c384d5a3f       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   3 minutes ago       Running             kube-controller-manager   0                   725c64e7716b6       kube-controller-manager-ingress-addon-legacy-855931
	6971061db4f4c       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   3 minutes ago       Running             etcd                      0                   230d4f1e90f9f       etcd-ingress-addon-legacy-855931
	3fe988cdd9668       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   3 minutes ago       Running             kube-apiserver            0                   8f93adc9f99ab       kube-apiserver-ingress-addon-legacy-855931
	
	* 
	* ==> coredns [4100edc6bb4173e54367b8eafe180a84c4195cbae7921b309021c0d1c523d7ab] <==
	* [INFO] 10.244.0.5:59458 - 53021 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036964s
	[INFO] 10.244.0.5:59458 - 16830 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002443764s
	[INFO] 10.244.0.5:53211 - 38207 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006596776s
	[INFO] 10.244.0.5:53211 - 4635 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001668749s
	[INFO] 10.244.0.5:59458 - 16712 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001927826s
	[INFO] 10.244.0.5:53211 - 48883 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000157161s
	[INFO] 10.244.0.5:59458 - 27776 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048386s
	[INFO] 10.244.0.5:36727 - 16896 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000099134s
	[INFO] 10.244.0.5:34935 - 9734 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000050625s
	[INFO] 10.244.0.5:36727 - 12493 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000047163s
	[INFO] 10.244.0.5:34935 - 44393 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053022s
	[INFO] 10.244.0.5:34935 - 19558 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040172s
	[INFO] 10.244.0.5:36727 - 16434 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000071089s
	[INFO] 10.244.0.5:34935 - 61554 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000083635s
	[INFO] 10.244.0.5:34935 - 27128 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063393s
	[INFO] 10.244.0.5:36727 - 30247 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043684s
	[INFO] 10.244.0.5:34935 - 32020 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000151196s
	[INFO] 10.244.0.5:36727 - 1917 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00023328s
	[INFO] 10.244.0.5:36727 - 32863 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041682s
	[INFO] 10.244.0.5:34935 - 46965 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001312597s
	[INFO] 10.244.0.5:36727 - 2503 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001056245s
	[INFO] 10.244.0.5:34935 - 56967 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001200589s
	[INFO] 10.244.0.5:34935 - 22477 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051446s
	[INFO] 10.244.0.5:36727 - 28209 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001090189s
	[INFO] 10.244.0.5:36727 - 39275 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000036078s
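
The burst of NXDOMAIN entries above is normal resolv.conf search-path expansion rather than a DNS failure: hello-world-app.default.svc.cluster.local has only four dots, so with ndots:5 the resolver tries every search suffix first, and only the final as-is query returns NOERROR. Reconstructed from the queried names, the resolver config of the querying pod (10.244.0.5, presumably in the ingress-nginx namespace given its first search suffix) would look roughly like this; the nameserver address is the conventional kube-dns ClusterIP and is an assumption:

    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    nameserver 10.96.0.10
    options ndots:5
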
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-855931
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-855931
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7e60a4db8510b81002db541520f138fed781588
	                    minikube.k8s.io/name=ingress-addon-legacy-855931
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T21_53_32_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 21:53:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-855931
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 21:57:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 21:57:05 +0000   Wed, 30 Aug 2023 21:53:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 21:57:05 +0000   Wed, 30 Aug 2023 21:53:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 21:57:05 +0000   Wed, 30 Aug 2023 21:53:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 21:57:05 +0000   Wed, 30 Aug 2023 21:53:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-855931
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022572Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022572Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a44e59ed2074b359653a76ee3cdee31
	  System UUID:                f1d65bf1-9560-424a-9217-8aacf9064705
	  Boot ID:                    98673563-8173-4281-afb4-eac1dfafdc23
	  Kernel Version:             5.15.0-1043-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-b46lq                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 coredns-66bff467f8-59cjz                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m29s
	  kube-system                 etcd-ingress-addon-legacy-855931                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kindnet-5q8cw                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m30s
	  kube-system                 kube-apiserver-ingress-addon-legacy-855931             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-855931    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 kube-proxy-7cxwp                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kube-scheduler-ingress-addon-legacy-855931             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m55s (x4 over 3m55s)  kubelet     Node ingress-addon-legacy-855931 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m55s (x5 over 3m55s)  kubelet     Node ingress-addon-legacy-855931 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m55s (x4 over 3m55s)  kubelet     Node ingress-addon-legacy-855931 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m41s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m41s                  kubelet     Node ingress-addon-legacy-855931 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m41s                  kubelet     Node ingress-addon-legacy-855931 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m41s                  kubelet     Node ingress-addon-legacy-855931 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m28s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m21s                  kubelet     Node ingress-addon-legacy-855931 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001068] FS-Cache: O-key=[8] 'a53f5c0100000000'
	[  +0.000743] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000990] FS-Cache: N-cookie d=00000000d8a48a2b{9p.inode} n=0000000052a3ffac
	[  +0.001181] FS-Cache: N-key=[8] 'a53f5c0100000000'
	[  +0.003620] FS-Cache: Duplicate cookie detected
	[  +0.000757] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000989] FS-Cache: O-cookie d=00000000d8a48a2b{9p.inode} n=00000000cfc10e18
	[  +0.001078] FS-Cache: O-key=[8] 'a53f5c0100000000'
	[  +0.000892] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000999] FS-Cache: N-cookie d=00000000d8a48a2b{9p.inode} n=00000000ec866464
	[  +0.001154] FS-Cache: N-key=[8] 'a53f5c0100000000'
	[  +3.285800] FS-Cache: Duplicate cookie detected
	[  +0.000913] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001105] FS-Cache: O-cookie d=00000000d8a48a2b{9p.inode} n=00000000185770a2
	[  +0.001225] FS-Cache: O-key=[8] 'a43f5c0100000000'
	[  +0.000833] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001080] FS-Cache: N-cookie d=00000000d8a48a2b{9p.inode} n=000000006d053276
	[  +0.001194] FS-Cache: N-key=[8] 'a43f5c0100000000'
	[  +0.414572] FS-Cache: Duplicate cookie detected
	[  +0.000724] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000967] FS-Cache: O-cookie d=00000000d8a48a2b{9p.inode} n=000000001a64e3e4
	[  +0.001029] FS-Cache: O-key=[8] 'aa3f5c0100000000'
	[  +0.000731] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000927] FS-Cache: N-cookie d=00000000d8a48a2b{9p.inode} n=00000000a41d18fb
	[  +0.001092] FS-Cache: N-key=[8] 'aa3f5c0100000000'
	
	* 
	* ==> etcd [6971061db4f4c3d08411e1818fd308e31613d1df8f326cbdab973aee7cf88ec4] <==
	* raft2023/08/30 21:53:22 INFO: aec36adc501070cc became follower at term 0
	raft2023/08/30 21:53:22 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/08/30 21:53:22 INFO: aec36adc501070cc became follower at term 1
	raft2023/08/30 21:53:22 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-30 21:53:22.450251 W | auth: simple token is not cryptographically signed
	2023-08-30 21:53:23.165256 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-08-30 21:53:23.197256 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/08/30 21:53:23 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-30 21:53:23.249411 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-08-30 21:53:23.333217 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	raft2023/08/30 21:53:23 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/08/30 21:53:23 INFO: aec36adc501070cc became candidate at term 2
	raft2023/08/30 21:53:23 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/08/30 21:53:23 INFO: aec36adc501070cc became leader at term 2
	raft2023/08/30 21:53:23 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-08-30 21:53:23.513189 I | embed: listening for peers on 192.168.49.2:2380
	2023-08-30 21:53:23.579729 I | etcdserver: published {Name:ingress-addon-legacy-855931 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-08-30 21:53:23.589804 I | etcdserver: setting up the initial cluster version to 3.4
	2023-08-30 21:53:23.645135 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-08-30 21:53:23.645318 I | etcdserver/api: enabled capabilities for version 3.4
	2023-08-30 21:53:23.683220 I | embed: ready to serve client requests
	2023-08-30 21:53:23.885167 I | embed: ready to serve client requests
	2023-08-30 21:53:23.886471 I | embed: serving client requests on 192.168.49.2:2379
	2023-08-30 21:53:24.125338 I | embed: serving client requests on 127.0.0.1:2379
	2023-08-30 21:53:24.537396 I | embed: listening for metrics on http://127.0.0.1:2381
	
	* 
	* ==> kernel <==
	*  21:57:16 up  6:39,  0 users,  load average: 0.63, 1.10, 1.55
	Linux ingress-addon-legacy-855931 5.15.0-1043-aws #48~20.04.1-Ubuntu SMP Wed Aug 16 18:32:42 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [6547cfbda2c25a24fde2fbdff20dfdb27a0423a5c75d86b8941dfb47f091ada1] <==
	* I0830 21:55:11.444283       1 main.go:227] handling current node
	I0830 21:55:21.451531       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:55:21.451561       1 main.go:227] handling current node
	I0830 21:55:31.456517       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:55:31.456547       1 main.go:227] handling current node
	I0830 21:55:41.460127       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:55:41.460170       1 main.go:227] handling current node
	I0830 21:55:51.463811       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:55:51.463842       1 main.go:227] handling current node
	I0830 21:56:01.474660       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:56:01.474691       1 main.go:227] handling current node
	I0830 21:56:11.484208       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:56:11.484239       1 main.go:227] handling current node
	I0830 21:56:21.496375       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:56:21.496407       1 main.go:227] handling current node
	I0830 21:56:31.501045       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:56:31.501081       1 main.go:227] handling current node
	I0830 21:56:41.505321       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:56:41.505349       1 main.go:227] handling current node
	I0830 21:56:51.508788       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:56:51.508820       1 main.go:227] handling current node
	I0830 21:57:01.515635       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:57:01.515664       1 main.go:227] handling current node
	I0830 21:57:11.519016       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0830 21:57:11.519045       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [3fe988cdd96689c05d91f3236f0d6930220d5d3caa3c5c600c2f6611036fc778] <==
	* I0830 21:53:28.743612       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0830 21:53:28.857586       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0830 21:53:28.946833       1 cache.go:39] Caches are synced for autoregister controller
	I0830 21:53:28.947201       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0830 21:53:28.947232       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0830 21:53:28.947265       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0830 21:53:28.947809       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0830 21:53:29.733182       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0830 21:53:29.733337       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0830 21:53:29.740846       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0830 21:53:29.745004       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0830 21:53:29.745100       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0830 21:53:30.241238       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0830 21:53:30.280707       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0830 21:53:30.381606       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0830 21:53:30.383011       1 controller.go:609] quota admission added evaluator for: endpoints
	I0830 21:53:30.391345       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0830 21:53:31.128387       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0830 21:53:31.670737       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0830 21:53:31.795251       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0830 21:53:35.115617       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0830 21:53:46.949612       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0830 21:53:47.068458       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0830 21:54:06.848146       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0830 21:54:32.246469       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [7a30c384d5a3fdfc54f6090e4332b4fccfd5a3f1a2b0ec369cc784734eb56f21] <==
	* I0830 21:53:47.325252       1 shared_informer.go:230] Caches are synced for PV protection 
	I0830 21:53:47.424188       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"3b4b4850-fa6f-460f-a998-b773d5cec4de", APIVersion:"apps/v1", ResourceVersion:"369", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0830 21:53:47.438431       1 shared_informer.go:230] Caches are synced for HPA 
	I0830 21:53:47.465334       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0830 21:53:47.465387       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0830 21:53:47.473081       1 shared_informer.go:230] Caches are synced for disruption 
	I0830 21:53:47.473113       1 disruption.go:339] Sending events to api server.
	I0830 21:53:47.474916       1 shared_informer.go:230] Caches are synced for expand 
	I0830 21:53:47.498340       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0830 21:53:47.514536       1 shared_informer.go:230] Caches are synced for PVC protection 
	I0830 21:53:47.523604       1 shared_informer.go:230] Caches are synced for resource quota 
	I0830 21:53:47.530995       1 shared_informer.go:230] Caches are synced for resource quota 
	I0830 21:53:47.531036       1 shared_informer.go:230] Caches are synced for stateful set 
	I0830 21:53:47.531059       1 shared_informer.go:230] Caches are synced for attach detach 
	I0830 21:53:47.531986       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0830 21:53:47.592125       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"b7f5dade-3269-410b-a682-f1c4dd104173", APIVersion:"apps/v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-zh9cf
	I0830 21:53:56.975621       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0830 21:54:06.793886       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f2de575f-d464-42ab-b9f2-73b7e24d5350", APIVersion:"apps/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0830 21:54:06.814958       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"8465377f-ae75-4b3f-b2eb-479bba239268", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-9qdbb
	I0830 21:54:06.866596       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b140ed03-3a1c-4c71-88c9-423ea5be89ac", APIVersion:"batch/v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-f4qrp
	I0830 21:54:06.922485       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"271d2b7b-e531-4033-a855-df5495f87adc", APIVersion:"batch/v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-px8h7
	I0830 21:54:09.253888       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b140ed03-3a1c-4c71-88c9-423ea5be89ac", APIVersion:"batch/v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0830 21:54:10.263108       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"271d2b7b-e531-4033-a855-df5495f87adc", APIVersion:"batch/v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0830 21:56:50.440870       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"79de9838-e270-4c31-94a8-ab2807ab1c0a", APIVersion:"apps/v1", ResourceVersion:"716", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0830 21:56:50.471939       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"122d3e22-22f9-400c-93c3-baf6e24fc5db", APIVersion:"apps/v1", ResourceVersion:"717", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-b46lq
	
	* 
	* ==> kube-proxy [df33f3140611b26a28f8b4bc4c983e10b17442866298d6acb8b335a49392fdc0] <==
	* W0830 21:53:48.070528       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0830 21:53:48.090907       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0830 21:53:48.090957       1 server_others.go:186] Using iptables Proxier.
	I0830 21:53:48.091292       1 server.go:583] Version: v1.18.20
	I0830 21:53:48.095031       1 config.go:315] Starting service config controller
	I0830 21:53:48.095141       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0830 21:53:48.133387       1 config.go:133] Starting endpoints config controller
	I0830 21:53:48.133473       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0830 21:53:48.227874       1 shared_informer.go:230] Caches are synced for service config 
	I0830 21:53:48.233793       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [6bdf159ea6eb03366fa60483303f0e91c792869a8e3282a8966effa6243e5709] <==
	* I0830 21:53:28.933665       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0830 21:53:28.935628       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0830 21:53:28.937055       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0830 21:53:28.937084       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0830 21:53:28.945974       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0830 21:53:28.953562       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0830 21:53:28.953838       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0830 21:53:28.953923       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0830 21:53:28.954027       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0830 21:53:28.957472       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0830 21:53:28.957578       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 21:53:28.957642       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0830 21:53:28.957718       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 21:53:28.957785       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0830 21:53:28.957847       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0830 21:53:28.957903       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 21:53:28.957960       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0830 21:53:29.771881       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0830 21:53:29.926627       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 21:53:29.929198       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 21:53:29.985418       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0830 21:53:30.012319       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0830 21:53:30.106347       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0830 21:53:32.845243       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0830 21:53:47.114412       1 factory.go:503] pod: kube-system/coredns-66bff467f8-zh9cf is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Aug 30 21:56:54 ingress-addon-legacy-855931 kubelet[1610]: I0830 21:56:54.597173    1610 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 02050d0ef8166d9952f35f9b645e2c92b2badb713978067b270cb041e240f015
	Aug 30 21:56:54 ingress-addon-legacy-855931 kubelet[1610]: I0830 21:56:54.597415    1610 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7b1b83bc800648027bf2162cb1b11521a2a7f1c4b6936784b21fb93088775eda
	Aug 30 21:56:54 ingress-addon-legacy-855931 kubelet[1610]: E0830 21:56:54.597784    1610 pod_workers.go:191] Error syncing pod 8abca753-efa5-4ea0-9aea-e3af2fac3afe ("hello-world-app-5f5d8b66bb-b46lq_default(8abca753-efa5-4ea0-9aea-e3af2fac3afe)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-b46lq_default(8abca753-efa5-4ea0-9aea-e3af2fac3afe)"
	Aug 30 21:56:55 ingress-addon-legacy-855931 kubelet[1610]: I0830 21:56:55.599934    1610 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7b1b83bc800648027bf2162cb1b11521a2a7f1c4b6936784b21fb93088775eda
	Aug 30 21:56:55 ingress-addon-legacy-855931 kubelet[1610]: E0830 21:56:55.600182    1610 pod_workers.go:191] Error syncing pod 8abca753-efa5-4ea0-9aea-e3af2fac3afe ("hello-world-app-5f5d8b66bb-b46lq_default(8abca753-efa5-4ea0-9aea-e3af2fac3afe)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-b46lq_default(8abca753-efa5-4ea0-9aea-e3af2fac3afe)"
	Aug 30 21:56:58 ingress-addon-legacy-855931 kubelet[1610]: E0830 21:56:58.128947    1610 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 30 21:56:58 ingress-addon-legacy-855931 kubelet[1610]: E0830 21:56:58.128998    1610 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 30 21:56:58 ingress-addon-legacy-855931 kubelet[1610]: E0830 21:56:58.129056    1610 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 30 21:56:58 ingress-addon-legacy-855931 kubelet[1610]: E0830 21:56:58.129091    1610 pod_workers.go:191] Error syncing pod 22fcdf4e-030e-4716-be9a-6451b74ce39a ("kube-ingress-dns-minikube_kube-system(22fcdf4e-030e-4716-be9a-6451b74ce39a)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Aug 30 21:57:06 ingress-addon-legacy-855931 kubelet[1610]: I0830 21:57:06.486883    1610 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-5vd48" (UniqueName: "kubernetes.io/secret/22fcdf4e-030e-4716-be9a-6451b74ce39a-minikube-ingress-dns-token-5vd48") pod "22fcdf4e-030e-4716-be9a-6451b74ce39a" (UID: "22fcdf4e-030e-4716-be9a-6451b74ce39a")
	Aug 30 21:57:06 ingress-addon-legacy-855931 kubelet[1610]: I0830 21:57:06.491410    1610 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22fcdf4e-030e-4716-be9a-6451b74ce39a-minikube-ingress-dns-token-5vd48" (OuterVolumeSpecName: "minikube-ingress-dns-token-5vd48") pod "22fcdf4e-030e-4716-be9a-6451b74ce39a" (UID: "22fcdf4e-030e-4716-be9a-6451b74ce39a"). InnerVolumeSpecName "minikube-ingress-dns-token-5vd48". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 30 21:57:06 ingress-addon-legacy-855931 kubelet[1610]: I0830 21:57:06.587252    1610 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-5vd48" (UniqueName: "kubernetes.io/secret/22fcdf4e-030e-4716-be9a-6451b74ce39a-minikube-ingress-dns-token-5vd48") on node "ingress-addon-legacy-855931" DevicePath ""
	Aug 30 21:57:07 ingress-addon-legacy-855931 kubelet[1610]: I0830 21:57:07.128128    1610 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7b1b83bc800648027bf2162cb1b11521a2a7f1c4b6936784b21fb93088775eda
	Aug 30 21:57:07 ingress-addon-legacy-855931 kubelet[1610]: I0830 21:57:07.618830    1610 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7b1b83bc800648027bf2162cb1b11521a2a7f1c4b6936784b21fb93088775eda
	Aug 30 21:57:07 ingress-addon-legacy-855931 kubelet[1610]: I0830 21:57:07.619648    1610 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 51d39c53d3f7f980e593dd7b79092652e1f45ead19ccdac59fb12a49abbf185e
	Aug 30 21:57:07 ingress-addon-legacy-855931 kubelet[1610]: E0830 21:57:07.619914    1610 pod_workers.go:191] Error syncing pod 8abca753-efa5-4ea0-9aea-e3af2fac3afe ("hello-world-app-5f5d8b66bb-b46lq_default(8abca753-efa5-4ea0-9aea-e3af2fac3afe)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-b46lq_default(8abca753-efa5-4ea0-9aea-e3af2fac3afe)"
	Aug 30 21:57:08 ingress-addon-legacy-855931 kubelet[1610]: E0830 21:57:08.584708    1610 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-9qdbb.17804800e5aa085b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-9qdbb", UID:"7acf5f48-8f0b-45c0-81e9-d6f871295688", APIVersion:"v1", ResourceVersion:"485", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-855931"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1340cad22ab005b, ext:216958088328, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1340cad22ab005b, ext:216958088328, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-9qdbb.17804800e5aa085b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 30 21:57:08 ingress-addon-legacy-855931 kubelet[1610]: E0830 21:57:08.622685    1610 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-9qdbb.17804800e5aa085b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-9qdbb", UID:"7acf5f48-8f0b-45c0-81e9-d6f871295688", APIVersion:"v1", ResourceVersion:"485", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-855931"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1340cad22ab005b, ext:216958088328, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1340cad23e70019, ext:216978797638, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-9qdbb.17804800e5aa085b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 30 21:57:11 ingress-addon-legacy-855931 kubelet[1610]: W0830 21:57:11.627462    1610 pod_container_deletor.go:77] Container "e76a7dee54deb0dc0198c919c6069898f772639a319631e9f1bc994976a09aba" not found in pod's containers
	Aug 30 21:57:12 ingress-addon-legacy-855931 kubelet[1610]: I0830 21:57:12.702710    1610 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-wm5zd" (UniqueName: "kubernetes.io/secret/7acf5f48-8f0b-45c0-81e9-d6f871295688-ingress-nginx-token-wm5zd") pod "7acf5f48-8f0b-45c0-81e9-d6f871295688" (UID: "7acf5f48-8f0b-45c0-81e9-d6f871295688")
	Aug 30 21:57:12 ingress-addon-legacy-855931 kubelet[1610]: I0830 21:57:12.702763    1610 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7acf5f48-8f0b-45c0-81e9-d6f871295688-webhook-cert") pod "7acf5f48-8f0b-45c0-81e9-d6f871295688" (UID: "7acf5f48-8f0b-45c0-81e9-d6f871295688")
	Aug 30 21:57:12 ingress-addon-legacy-855931 kubelet[1610]: I0830 21:57:12.709146    1610 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acf5f48-8f0b-45c0-81e9-d6f871295688-ingress-nginx-token-wm5zd" (OuterVolumeSpecName: "ingress-nginx-token-wm5zd") pod "7acf5f48-8f0b-45c0-81e9-d6f871295688" (UID: "7acf5f48-8f0b-45c0-81e9-d6f871295688"). InnerVolumeSpecName "ingress-nginx-token-wm5zd". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 30 21:57:12 ingress-addon-legacy-855931 kubelet[1610]: I0830 21:57:12.709780    1610 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acf5f48-8f0b-45c0-81e9-d6f871295688-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7acf5f48-8f0b-45c0-81e9-d6f871295688" (UID: "7acf5f48-8f0b-45c0-81e9-d6f871295688"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 30 21:57:12 ingress-addon-legacy-855931 kubelet[1610]: I0830 21:57:12.803070    1610 reconciler.go:319] Volume detached for volume "ingress-nginx-token-wm5zd" (UniqueName: "kubernetes.io/secret/7acf5f48-8f0b-45c0-81e9-d6f871295688-ingress-nginx-token-wm5zd") on node "ingress-addon-legacy-855931" DevicePath ""
	Aug 30 21:57:12 ingress-addon-legacy-855931 kubelet[1610]: I0830 21:57:12.803116    1610 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7acf5f48-8f0b-45c0-81e9-d6f871295688-webhook-cert") on node "ingress-addon-legacy-855931" DevicePath ""
	
	* 
	* ==> storage-provisioner [54d82ad52c7aa1d68b89395345b9c112ff7bdb4738ec2231e721db243e565819] <==
	* I0830 21:54:02.886536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0830 21:54:02.902002       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0830 21:54:02.902187       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0830 21:54:02.910024       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0830 21:54:02.910425       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-855931_98451ef0-a96b-4593-96cf-cfe13273353c!
	I0830 21:54:02.910949       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cb711b55-889e-4fe9-a16f-de828795be1d", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-855931_98451ef0-a96b-4593-96cf-cfe13273353c became leader
	I0830 21:54:03.010782       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-855931_98451ef0-a96b-4593-96cf-cfe13273353c!
	

                                                
                                                
-- /stdout --
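Note on the kube-scheduler section of the logs above: the burst of "forbidden" list errors between 21:53:28 and 21:53:30 is the usual informer start-up race (the scheduler begins listing resources before its RBAC bindings are served) and it stops once "Caches are synced" at 21:53:32, so it is background noise rather than part of this failure. One way to confirm after the fact, sketched with kubectl impersonation (context name taken from this report):

	# Sketch: "yes" here means scheduler RBAC is in place and the earlier errors were transient.
	kubectl --context ingress-addon-legacy-855931 auth can-i list pods --as=system:kube-scheduler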
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-855931 -n ingress-addon-legacy-855931
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-855931 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (179.31s)
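Note: the kubelet log above shows the likely root cause of the ingress-dns half of this failure: kube-ingress-dns-minikube never starts because CRI-O rejects the short image name "cryptexlabs/minikube-ingress-dns:0.3.0" when no unqualified-search registries are configured. The durable fix is to reference the image fully qualified (docker.io/cryptexlabs/minikube-ingress-dns:0.3.0 with the same digest pin); a hedged workaround sketch, assuming the node's CRI-O reads registries.conf v2 and runs under systemd:

	# Sketch: allow short-name resolution via docker.io, then restart the runtime.
	out/minikube-linux-arm64 -p ingress-addon-legacy-855931 ssh \
		"echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf && sudo systemctl restart crio"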

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994875 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994875 -- exec busybox-5bc68d56bd-8gn7x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994875 -- exec busybox-5bc68d56bd-8gn7x -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-994875 -- exec busybox-5bc68d56bd-8gn7x -- sh -c "ping -c 1 192.168.58.1": exit status 1 (238.45301ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-8gn7x): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994875 -- exec busybox-5bc68d56bd-rdfhb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994875 -- exec busybox-5bc68d56bd-rdfhb -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-994875 -- exec busybox-5bc68d56bd-rdfhb -- sh -c "ping -c 1 192.168.58.1": exit status 1 (248.169291ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-rdfhb): exit status 1
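Note: "ping: permission denied (are you root?)" from busybox means the pod could not open an ICMP socket at all. Raw sockets need root or CAP_NET_RAW, which CRI-O's default capability set does not appear to grant (unlike dockerd), and unprivileged ICMP Echo sockets are only allowed for groups inside net.ipv4.ping_group_range, whose Linux default "1 0" is an empty range. A hedged diagnostic sketch against these pods; the fix would be along the lines of securityContext.capabilities.add: ["NET_RAW"] on the busybox container:

	# Sketch: inspect effective capabilities and the unprivileged-ping group range in-pod.
	out/minikube-linux-arm64 kubectl -p multinode-994875 -- exec busybox-5bc68d56bd-8gn7x -- \
		sh -c "grep CapEff /proc/self/status; cat /proc/sys/net/ipv4/ping_group_range"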
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-994875
helpers_test.go:235: (dbg) docker inspect multinode-994875:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9f440389aa1f0e1edb3413132ceff0a094431388097aea03d597a527064c8544",
	        "Created": "2023-08-30T22:03:22.32909945Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1054679,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-30T22:03:22.644291077Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c0704b3a4f8b9b9ec71e677be36506d49ffd7d56513ca0bdb5d12d8921195405",
	        "ResolvConfPath": "/var/lib/docker/containers/9f440389aa1f0e1edb3413132ceff0a094431388097aea03d597a527064c8544/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f440389aa1f0e1edb3413132ceff0a094431388097aea03d597a527064c8544/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f440389aa1f0e1edb3413132ceff0a094431388097aea03d597a527064c8544/hosts",
	        "LogPath": "/var/lib/docker/containers/9f440389aa1f0e1edb3413132ceff0a094431388097aea03d597a527064c8544/9f440389aa1f0e1edb3413132ceff0a094431388097aea03d597a527064c8544-json.log",
	        "Name": "/multinode-994875",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-994875:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-994875",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cc62d7994132016f994737f42ba6c467ec8d44a14f7f6db37c7d9663c58dc2c5-init/diff:/var/lib/docker/overlay2/5a8abadbbe02000d4a1cbd31235f9b3bba474489fe1515f2d12f946a2d011f32/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc62d7994132016f994737f42ba6c467ec8d44a14f7f6db37c7d9663c58dc2c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc62d7994132016f994737f42ba6c467ec8d44a14f7f6db37c7d9663c58dc2c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc62d7994132016f994737f42ba6c467ec8d44a14f7f6db37c7d9663c58dc2c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-994875",
	                "Source": "/var/lib/docker/volumes/multinode-994875/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-994875",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-994875",
	                "name.minikube.sigs.k8s.io": "multinode-994875",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f2e8de63eb4e919e74c76c1f8fd49d1b0b6becb8ba2c765cf1a680c25be52b40",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34087"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34086"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34085"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f2e8de63eb4e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-994875": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9f440389aa1f",
	                        "multinode-994875"
	                    ],
	                    "NetworkID": "5b839e887fc7323610a811b165e3dc9fa4ec6feec143ee1645c087f269051186",
	                    "EndpointID": "471c7c3c5eed8af796b8dc8c8e30ea21f5433b13c0907f43cfb73a3dda9c44d4",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
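Note: the Networks block in the inspect output above confirms that 192.168.58.1 is the gateway of the multinode-994875 Docker network, so the test pinged the correct host-side address and the failure is the in-pod permission error, not routing. A quick sketch to extract just the gateway with docker's Go-template output:

	docker network inspect multinode-994875 --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'
	# expected output: 192.168.58.1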
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-994875 -n multinode-994875
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-994875 logs -n 25: (1.895011707s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-462667                           | mount-start-2-462667 | jenkins | v1.31.2 | 30 Aug 23 22:02 UTC | 30 Aug 23 22:03 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-462667 ssh -- ls                    | mount-start-2-462667 | jenkins | v1.31.2 | 30 Aug 23 22:03 UTC | 30 Aug 23 22:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-460394                           | mount-start-1-460394 | jenkins | v1.31.2 | 30 Aug 23 22:03 UTC | 30 Aug 23 22:03 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-462667 ssh -- ls                    | mount-start-2-462667 | jenkins | v1.31.2 | 30 Aug 23 22:03 UTC | 30 Aug 23 22:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-462667                           | mount-start-2-462667 | jenkins | v1.31.2 | 30 Aug 23 22:03 UTC | 30 Aug 23 22:03 UTC |
	| start   | -p mount-start-2-462667                           | mount-start-2-462667 | jenkins | v1.31.2 | 30 Aug 23 22:03 UTC | 30 Aug 23 22:03 UTC |
	| ssh     | mount-start-2-462667 ssh -- ls                    | mount-start-2-462667 | jenkins | v1.31.2 | 30 Aug 23 22:03 UTC | 30 Aug 23 22:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-462667                           | mount-start-2-462667 | jenkins | v1.31.2 | 30 Aug 23 22:03 UTC | 30 Aug 23 22:03 UTC |
	| delete  | -p mount-start-1-460394                           | mount-start-1-460394 | jenkins | v1.31.2 | 30 Aug 23 22:03 UTC | 30 Aug 23 22:03 UTC |
	| start   | -p multinode-994875                               | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:03 UTC | 30 Aug 23 22:05 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-994875 -- apply -f                   | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:05 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-994875 -- rollout                    | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:05 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-994875 -- get pods -o                | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:05 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-994875 -- get pods -o                | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-994875 -- exec                       | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:05 UTC |
	|         | busybox-5bc68d56bd-8gn7x --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-994875 -- exec                       | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:05 UTC |
	|         | busybox-5bc68d56bd-rdfhb --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-994875 -- exec                       | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:05 UTC |
	|         | busybox-5bc68d56bd-8gn7x --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-994875 -- exec                       | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:05 UTC |
	|         | busybox-5bc68d56bd-rdfhb --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-994875 -- exec                       | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:05 UTC |
	|         | busybox-5bc68d56bd-8gn7x -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-994875 -- exec                       | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:05 UTC |
	|         | busybox-5bc68d56bd-rdfhb -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-994875 -- get pods -o                | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-994875 -- exec                       | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:05 UTC |
	|         | busybox-5bc68d56bd-8gn7x                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-994875 -- exec                       | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | busybox-5bc68d56bd-8gn7x -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-994875 -- exec                       | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC | 30 Aug 23 22:05 UTC |
	|         | busybox-5bc68d56bd-rdfhb                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-994875 -- exec                       | multinode-994875     | jenkins | v1.31.2 | 30 Aug 23 22:05 UTC |                     |
	|         | busybox-5bc68d56bd-rdfhb -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 22:03:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 22:03:16.926573 1054224 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:03:16.926723 1054224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:03:16.926732 1054224 out.go:309] Setting ErrFile to fd 2...
	I0830 22:03:16.926738 1054224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:03:16.927018 1054224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
	I0830 22:03:16.927428 1054224 out.go:303] Setting JSON to false
	I0830 22:03:16.928308 1054224 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24331,"bootTime":1693408666,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0830 22:03:16.928384 1054224 start.go:138] virtualization:  
	I0830 22:03:16.930776 1054224 out.go:177] * [multinode-994875] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0830 22:03:16.932905 1054224 out.go:177]   - MINIKUBE_LOCATION=17145
	I0830 22:03:16.934511 1054224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:03:16.933047 1054224 notify.go:220] Checking for updates...
	I0830 22:03:16.937704 1054224 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 22:03:16.939070 1054224 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	I0830 22:03:16.940853 1054224 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0830 22:03:16.942663 1054224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:03:16.944627 1054224 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:03:16.972351 1054224 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 22:03:16.972451 1054224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:03:17.064049 1054224 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:35 SystemTime:2023-08-30 22:03:17.054083399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:03:17.064155 1054224 docker.go:294] overlay module found
	I0830 22:03:17.066484 1054224 out.go:177] * Using the docker driver based on user configuration
	I0830 22:03:17.068199 1054224 start.go:298] selected driver: docker
	I0830 22:03:17.068226 1054224 start.go:902] validating driver "docker" against <nil>
	I0830 22:03:17.068242 1054224 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:03:17.068843 1054224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:03:17.139857 1054224 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:35 SystemTime:2023-08-30 22:03:17.130333173 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:03:17.140017 1054224 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 22:03:17.140242 1054224 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 22:03:17.142391 1054224 out.go:177] * Using Docker driver with root privileges
	I0830 22:03:17.144099 1054224 cni.go:84] Creating CNI manager for ""
	I0830 22:03:17.144121 1054224 cni.go:136] 0 nodes found, recommending kindnet
	I0830 22:03:17.144141 1054224 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0830 22:03:17.144155 1054224 start_flags.go:319] config:
	{Name:multinode-994875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-994875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:03:17.146545 1054224 out.go:177] * Starting control plane node multinode-994875 in cluster multinode-994875
	I0830 22:03:17.148254 1054224 cache.go:122] Beginning downloading kic base image for docker with crio
	I0830 22:03:17.150240 1054224 out.go:177] * Pulling base image ...
	I0830 22:03:17.152024 1054224 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:03:17.152085 1054224 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0830 22:03:17.152092 1054224 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0830 22:03:17.152099 1054224 cache.go:57] Caching tarball of preloaded images
	I0830 22:03:17.152180 1054224 preload.go:174] Found /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0830 22:03:17.152197 1054224 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 22:03:17.152565 1054224 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/config.json ...
	I0830 22:03:17.152586 1054224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/config.json: {Name:mk7e8dbf415c6edc92e123c6f48b036a6d0d07c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:03:17.169827 1054224 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon, skipping pull
	I0830 22:03:17.169853 1054224 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad exists in daemon, skipping load
	I0830 22:03:17.169874 1054224 cache.go:195] Successfully downloaded all kic artifacts
	I0830 22:03:17.169943 1054224 start.go:365] acquiring machines lock for multinode-994875: {Name:mk22e254e996269d37f916136bd950cb29bfcd84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:03:17.170063 1054224 start.go:369] acquired machines lock for "multinode-994875" in 96.812µs
	I0830 22:03:17.170095 1054224 start.go:93] Provisioning new machine with config: &{Name:multinode-994875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-994875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:03:17.170185 1054224 start.go:125] createHost starting for "" (driver="docker")
	I0830 22:03:17.172717 1054224 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0830 22:03:17.172971 1054224 start.go:159] libmachine.API.Create for "multinode-994875" (driver="docker")
	I0830 22:03:17.173001 1054224 client.go:168] LocalClient.Create starting
	I0830 22:03:17.173094 1054224 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem
	I0830 22:03:17.173149 1054224 main.go:141] libmachine: Decoding PEM data...
	I0830 22:03:17.173167 1054224 main.go:141] libmachine: Parsing certificate...
	I0830 22:03:17.173228 1054224 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem
	I0830 22:03:17.173250 1054224 main.go:141] libmachine: Decoding PEM data...
	I0830 22:03:17.173265 1054224 main.go:141] libmachine: Parsing certificate...
	I0830 22:03:17.173665 1054224 cli_runner.go:164] Run: docker network inspect multinode-994875 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0830 22:03:17.194938 1054224 cli_runner.go:211] docker network inspect multinode-994875 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0830 22:03:17.195076 1054224 network_create.go:281] running [docker network inspect multinode-994875] to gather additional debugging logs...
	I0830 22:03:17.195106 1054224 cli_runner.go:164] Run: docker network inspect multinode-994875
	W0830 22:03:17.212086 1054224 cli_runner.go:211] docker network inspect multinode-994875 returned with exit code 1
	I0830 22:03:17.212118 1054224 network_create.go:284] error running [docker network inspect multinode-994875]: docker network inspect multinode-994875: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-994875 not found
	I0830 22:03:17.212130 1054224 network_create.go:286] output of [docker network inspect multinode-994875]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-994875 not found
	
	** /stderr **
	I0830 22:03:17.212191 1054224 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0830 22:03:17.229992 1054224 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1011c5a7d786 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:38:8f:57:4b} reservation:<nil>}
	I0830 22:03:17.230372 1054224 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000d6dd20}
	I0830 22:03:17.230397 1054224 network_create.go:123] attempt to create docker network multinode-994875 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0830 22:03:17.230470 1054224 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-994875 multinode-994875
	I0830 22:03:17.303726 1054224 network_create.go:107] docker network multinode-994875 192.168.58.0/24 created
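(For reference when debugging subnet clashes: the network-create step above can be replayed by hand. A minimal sketch using only values that appear in the log; the flags are standard docker CLI, nothing minikube-specific is assumed.)

	# recreate the bridge network minikube chose, then confirm its IPAM config
	docker network create --driver=bridge --subnet=192.168.58.0/24 \
	  --gateway=192.168.58.1 -o com.docker.network.driver.mtu=1500 multinode-994875
	docker network inspect multinode-994875 --format '{{json .IPAM.Config}}'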
	I0830 22:03:17.303759 1054224 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-994875" container
	I0830 22:03:17.303835 1054224 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0830 22:03:17.321993 1054224 cli_runner.go:164] Run: docker volume create multinode-994875 --label name.minikube.sigs.k8s.io=multinode-994875 --label created_by.minikube.sigs.k8s.io=true
	I0830 22:03:17.341483 1054224 oci.go:103] Successfully created a docker volume multinode-994875
	I0830 22:03:17.341574 1054224 cli_runner.go:164] Run: docker run --rm --name multinode-994875-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-994875 --entrypoint /usr/bin/test -v multinode-994875:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -d /var/lib
	I0830 22:03:17.934918 1054224 oci.go:107] Successfully prepared a docker volume multinode-994875
	I0830 22:03:17.934958 1054224 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:03:17.934984 1054224 kic.go:190] Starting extracting preloaded images to volume ...
	I0830 22:03:17.935079 1054224 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-994875:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir
	I0830 22:03:22.233593 1054224 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-994875:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir: (4.29845457s)
	I0830 22:03:22.233630 1054224 kic.go:199] duration metric: took 4.298645 seconds to extract preloaded images to volume
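(Worth noting: rather than pulling images inside the node, minikube untars a cached image bundle straight into the node's /var volume through a throwaway container. A hedged sketch of the same pattern; the tarball path and image tag are placeholders, and the image must ship /usr/bin/tar plus lz4, as kicbase does.)

	# unpack a preload tarball into a named volume without booting the node
	docker run --rm --entrypoint /usr/bin/tar \
	  -v /path/to/preload.tar.lz4:/preloaded.tar:ro \
	  -v multinode-994875:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:<tag> -I lz4 -xf /preloaded.tar -C /extractDir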
	W0830 22:03:22.233825 1054224 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0830 22:03:22.233965 1054224 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0830 22:03:22.307767 1054224 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-994875 --name multinode-994875 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-994875 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-994875 --network multinode-994875 --ip 192.168.58.2 --volume multinode-994875:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad
	I0830 22:03:22.653666 1054224 cli_runner.go:164] Run: docker container inspect multinode-994875 --format={{.State.Running}}
	I0830 22:03:22.680312 1054224 cli_runner.go:164] Run: docker container inspect multinode-994875 --format={{.State.Status}}
	I0830 22:03:22.703310 1054224 cli_runner.go:164] Run: docker exec multinode-994875 stat /var/lib/dpkg/alternatives/iptables
	I0830 22:03:22.797618 1054224 oci.go:144] the created container "multinode-994875" has a running status.
	I0830 22:03:22.797644 1054224 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875/id_rsa...
	I0830 22:03:23.035021 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0830 22:03:23.035101 1054224 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0830 22:03:23.065350 1054224 cli_runner.go:164] Run: docker container inspect multinode-994875 --format={{.State.Status}}
	I0830 22:03:23.096401 1054224 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0830 22:03:23.096420 1054224 kic_runner.go:114] Args: [docker exec --privileged multinode-994875 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0830 22:03:23.212818 1054224 cli_runner.go:164] Run: docker container inspect multinode-994875 --format={{.State.Status}}
	I0830 22:03:23.239509 1054224 machine.go:88] provisioning docker machine ...
	I0830 22:03:23.239553 1054224 ubuntu.go:169] provisioning hostname "multinode-994875"
	I0830 22:03:23.239617 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875
	I0830 22:03:23.272154 1054224 main.go:141] libmachine: Using SSH client type: native
	I0830 22:03:23.272611 1054224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34088 <nil> <nil>}
	I0830 22:03:23.272624 1054224 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-994875 && echo "multinode-994875" | sudo tee /etc/hostname
	I0830 22:03:23.273267 1054224 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0830 22:03:26.437096 1054224 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-994875
	
	I0830 22:03:26.437208 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875
	I0830 22:03:26.456151 1054224 main.go:141] libmachine: Using SSH client type: native
	I0830 22:03:26.456653 1054224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34088 <nil> <nil>}
	I0830 22:03:26.456679 1054224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-994875' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-994875/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-994875' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:03:26.598684 1054224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:03:26.598710 1054224 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17145-984449/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-984449/.minikube}
	I0830 22:03:26.598733 1054224 ubuntu.go:177] setting up certificates
	I0830 22:03:26.598748 1054224 provision.go:83] configureAuth start
	I0830 22:03:26.598816 1054224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994875
	I0830 22:03:26.616717 1054224 provision.go:138] copyHostCerts
	I0830 22:03:26.616754 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem
	I0830 22:03:26.616783 1054224 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem, removing ...
	I0830 22:03:26.616789 1054224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem
	I0830 22:03:26.616864 1054224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem (1082 bytes)
	I0830 22:03:26.616942 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem
	I0830 22:03:26.616959 1054224 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem, removing ...
	I0830 22:03:26.616963 1054224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem
	I0830 22:03:26.616990 1054224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem (1123 bytes)
	I0830 22:03:26.617032 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem
	I0830 22:03:26.617046 1054224 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem, removing ...
	I0830 22:03:26.617052 1054224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem
	I0830 22:03:26.617078 1054224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem (1679 bytes)
	I0830 22:03:26.617120 1054224 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem org=jenkins.multinode-994875 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-994875]
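(minikube issues this server certificate with its own Go helpers in provision.go; for anyone re-issuing it by hand, an illustrative openssl equivalent follows. The SAN list and the 1095-day lifetime, matching CertExpiration:26280h0m0s, come from the log; everything else is an assumption, not the test's code path.)

	# illustrative only: sign a server cert with the same SANs against the minikube CA
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.multinode-994875" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 1095 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:192.168.58.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-994875')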
	I0830 22:03:27.025558 1054224 provision.go:172] copyRemoteCerts
	I0830 22:03:27.025634 1054224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:03:27.025681 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875
	I0830 22:03:27.051596 1054224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34088 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875/id_rsa Username:docker}
	I0830 22:03:27.155999 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0830 22:03:27.156070 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0830 22:03:27.185418 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0830 22:03:27.185520 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0830 22:03:27.214257 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0830 22:03:27.214363 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:03:27.243370 1054224 provision.go:86] duration metric: configureAuth took 644.608133ms
	I0830 22:03:27.243435 1054224 ubuntu.go:193] setting minikube options for container-runtime
	I0830 22:03:27.243658 1054224 config.go:182] Loaded profile config "multinode-994875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:03:27.243777 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875
	I0830 22:03:27.261015 1054224 main.go:141] libmachine: Using SSH client type: native
	I0830 22:03:27.261476 1054224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34088 <nil> <nil>}
	I0830 22:03:27.261501 1054224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:03:27.511903 1054224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:03:27.511926 1054224 machine.go:91] provisioned docker machine in 4.272385176s
	I0830 22:03:27.511937 1054224 client.go:171] LocalClient.Create took 10.338928033s
	I0830 22:03:27.511952 1054224 start.go:167] duration metric: libmachine.API.Create for "multinode-994875" took 10.338981473s
	I0830 22:03:27.511960 1054224 start.go:300] post-start starting for "multinode-994875" (driver="docker")
	I0830 22:03:27.511968 1054224 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:03:27.512042 1054224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:03:27.512090 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875
	I0830 22:03:27.530200 1054224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34088 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875/id_rsa Username:docker}
	I0830 22:03:27.632509 1054224 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:03:27.636767 1054224 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0830 22:03:27.636787 1054224 command_runner.go:130] > NAME="Ubuntu"
	I0830 22:03:27.636793 1054224 command_runner.go:130] > VERSION_ID="22.04"
	I0830 22:03:27.636800 1054224 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0830 22:03:27.636805 1054224 command_runner.go:130] > VERSION_CODENAME=jammy
	I0830 22:03:27.636812 1054224 command_runner.go:130] > ID=ubuntu
	I0830 22:03:27.636817 1054224 command_runner.go:130] > ID_LIKE=debian
	I0830 22:03:27.636822 1054224 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0830 22:03:27.636829 1054224 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0830 22:03:27.636836 1054224 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0830 22:03:27.636845 1054224 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0830 22:03:27.636853 1054224 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0830 22:03:27.636899 1054224 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0830 22:03:27.636926 1054224 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0830 22:03:27.636939 1054224 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0830 22:03:27.636948 1054224 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0830 22:03:27.636958 1054224 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/addons for local assets ...
	I0830 22:03:27.637022 1054224 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/files for local assets ...
	I0830 22:03:27.637099 1054224 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem -> 9898252.pem in /etc/ssl/certs
	I0830 22:03:27.637109 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem -> /etc/ssl/certs/9898252.pem
	I0830 22:03:27.637237 1054224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:03:27.647823 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem --> /etc/ssl/certs/9898252.pem (1708 bytes)
	I0830 22:03:27.677020 1054224 start.go:303] post-start completed in 165.04639ms
	I0830 22:03:27.677529 1054224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994875
	I0830 22:03:27.695403 1054224 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/config.json ...
	I0830 22:03:27.695702 1054224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 22:03:27.695759 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875
	I0830 22:03:27.713560 1054224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34088 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875/id_rsa Username:docker}
	I0830 22:03:27.807599 1054224 command_runner.go:130] > 17%
	I0830 22:03:27.808204 1054224 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0830 22:03:27.814297 1054224 command_runner.go:130] > 161G
	I0830 22:03:27.814662 1054224 start.go:128] duration metric: createHost completed in 10.644449565s
	I0830 22:03:27.814678 1054224 start.go:83] releasing machines lock for "multinode-994875", held for 10.644601048s
	I0830 22:03:27.814756 1054224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994875
	I0830 22:03:27.833566 1054224 ssh_runner.go:195] Run: cat /version.json
	I0830 22:03:27.833630 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875
	I0830 22:03:27.833575 1054224 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:03:27.833725 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875
	I0830 22:03:27.857645 1054224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34088 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875/id_rsa Username:docker}
	I0830 22:03:27.860334 1054224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34088 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875/id_rsa Username:docker}
	I0830 22:03:27.957558 1054224 command_runner.go:130] > {"iso_version": "v1.31.0-1692872107-17120", "kicbase_version": "v0.0.40-1693218425-17145", "minikube_version": "v1.31.2", "commit": "20676dbfdaf9085e354365adb7c56448fb3dd7be"}
	I0830 22:03:28.086591 1054224 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0830 22:03:28.090157 1054224 ssh_runner.go:195] Run: systemctl --version
	I0830 22:03:28.095865 1054224 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0830 22:03:28.095952 1054224 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0830 22:03:28.096270 1054224 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:03:28.244094 1054224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 22:03:28.249405 1054224 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0830 22:03:28.249432 1054224 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0830 22:03:28.249439 1054224 command_runner.go:130] > Device: 3ah/58d	Inode: 1301502     Links: 1
	I0830 22:03:28.249446 1054224 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 22:03:28.249475 1054224 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0830 22:03:28.249488 1054224 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0830 22:03:28.249495 1054224 command_runner.go:130] > Change: 2023-08-30 21:37:54.211838540 +0000
	I0830 22:03:28.249504 1054224 command_runner.go:130] >  Birth: 2023-08-30 21:37:54.211838540 +0000
	I0830 22:03:28.249829 1054224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:03:28.273075 1054224 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0830 22:03:28.273249 1054224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:03:28.311649 1054224 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0830 22:03:28.311685 1054224 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
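(Both CNI-disabling steps above use the same rename-to-.mk_disabled trick, parking configs where CRI-O will not load them instead of deleting them. A consolidated sketch with the globs quoted; the log's unquoted form works only because the patterns match nothing on the calling host.)

	# park non-minikube CNI configs so CRI-O stops reading them
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' -o -name '*loopback.conf*' \) \
	     -and -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;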
	I0830 22:03:28.311693 1054224 start.go:466] detecting cgroup driver to use...
	I0830 22:03:28.311724 1054224 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0830 22:03:28.311774 1054224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:03:28.330826 1054224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:03:28.345233 1054224 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:03:28.345302 1054224 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:03:28.363131 1054224 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:03:28.381351 1054224 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:03:28.476095 1054224 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:03:28.578531 1054224 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0830 22:03:28.578561 1054224 docker.go:212] disabling docker service ...
	I0830 22:03:28.578629 1054224 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:03:28.600456 1054224 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:03:28.614543 1054224 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:03:28.709553 1054224 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0830 22:03:28.709627 1054224 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:03:28.817389 1054224 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0830 22:03:28.817492 1054224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
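(The sequence above stops, disables, and masks both docker and cri-dockerd so that CRI-O is the only runtime left answering on the node; condensed here into one replayable block, same systemctl verbs as the log, units merely grouped per call.)

	# make CRI-O the sole container runtime on the node
	sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service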
	I0830 22:03:28.831021 1054224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:03:28.849618 1054224 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
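(Writing /etc/crictl.yaml is what lets the bare crictl calls later in this log find CRI-O; without it, the endpoint must be passed explicitly, e.g.:)

	# equivalent to relying on /etc/crictl.yaml
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version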
	I0830 22:03:28.851034 1054224 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:03:28.851096 1054224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:03:28.863907 1054224 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:03:28.863978 1054224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:03:28.876323 1054224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:03:28.888250 1054224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:03:28.900408 1054224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:03:28.911750 1054224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:03:28.921026 1054224 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0830 22:03:28.922243 1054224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:03:28.932457 1054224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:03:29.019976 1054224 ssh_runner.go:195] Run: sudo systemctl restart crio
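(The sed edits above all target the same CRI-O drop-in file, so the node's runtime configuration can be replayed as one unit when reproducing this setup; every value below is copied from the log.)

	# replay minikube's CRI-O drop-in edits, then restart the runtime
	f=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$f"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$f"
	sudo sed -i '/conmon_cgroup = .*/d' "$f"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$f"
	sudo systemctl daemon-reload && sudo systemctl restart crio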
	I0830 22:03:29.149261 1054224 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:03:29.149337 1054224 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:03:29.155118 1054224 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0830 22:03:29.155139 1054224 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0830 22:03:29.155147 1054224 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I0830 22:03:29.155155 1054224 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 22:03:29.155162 1054224 command_runner.go:130] > Access: 2023-08-30 22:03:29.133883706 +0000
	I0830 22:03:29.155169 1054224 command_runner.go:130] > Modify: 2023-08-30 22:03:29.133883706 +0000
	I0830 22:03:29.155181 1054224 command_runner.go:130] > Change: 2023-08-30 22:03:29.133883706 +0000
	I0830 22:03:29.155189 1054224 command_runner.go:130] >  Birth: -
	I0830 22:03:29.155433 1054224 start.go:534] Will wait 60s for crictl version
	I0830 22:03:29.155487 1054224 ssh_runner.go:195] Run: which crictl
	I0830 22:03:29.159818 1054224 command_runner.go:130] > /usr/bin/crictl
	I0830 22:03:29.160232 1054224 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:03:29.200810 1054224 command_runner.go:130] > Version:  0.1.0
	I0830 22:03:29.200884 1054224 command_runner.go:130] > RuntimeName:  cri-o
	I0830 22:03:29.200903 1054224 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0830 22:03:29.200923 1054224 command_runner.go:130] > RuntimeApiVersion:  v1
	I0830 22:03:29.203720 1054224 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0830 22:03:29.203873 1054224 ssh_runner.go:195] Run: crio --version
	I0830 22:03:29.247463 1054224 command_runner.go:130] > crio version 1.24.6
	I0830 22:03:29.247527 1054224 command_runner.go:130] > Version:          1.24.6
	I0830 22:03:29.247549 1054224 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0830 22:03:29.247605 1054224 command_runner.go:130] > GitTreeState:     clean
	I0830 22:03:29.247627 1054224 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0830 22:03:29.247644 1054224 command_runner.go:130] > GoVersion:        go1.18.2
	I0830 22:03:29.247661 1054224 command_runner.go:130] > Compiler:         gc
	I0830 22:03:29.247693 1054224 command_runner.go:130] > Platform:         linux/arm64
	I0830 22:03:29.247715 1054224 command_runner.go:130] > Linkmode:         dynamic
	I0830 22:03:29.247739 1054224 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0830 22:03:29.247768 1054224 command_runner.go:130] > SeccompEnabled:   true
	I0830 22:03:29.247788 1054224 command_runner.go:130] > AppArmorEnabled:  false
	I0830 22:03:29.249248 1054224 ssh_runner.go:195] Run: crio --version
	I0830 22:03:29.294511 1054224 command_runner.go:130] > crio version 1.24.6
	I0830 22:03:29.294531 1054224 command_runner.go:130] > Version:          1.24.6
	I0830 22:03:29.294540 1054224 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0830 22:03:29.294546 1054224 command_runner.go:130] > GitTreeState:     clean
	I0830 22:03:29.294553 1054224 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0830 22:03:29.294558 1054224 command_runner.go:130] > GoVersion:        go1.18.2
	I0830 22:03:29.294563 1054224 command_runner.go:130] > Compiler:         gc
	I0830 22:03:29.294569 1054224 command_runner.go:130] > Platform:         linux/arm64
	I0830 22:03:29.294575 1054224 command_runner.go:130] > Linkmode:         dynamic
	I0830 22:03:29.294584 1054224 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0830 22:03:29.294592 1054224 command_runner.go:130] > SeccompEnabled:   true
	I0830 22:03:29.294601 1054224 command_runner.go:130] > AppArmorEnabled:  false
	I0830 22:03:29.299275 1054224 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0830 22:03:29.301011 1054224 cli_runner.go:164] Run: docker network inspect multinode-994875 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0830 22:03:29.318855 1054224 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0830 22:03:29.323536 1054224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:03:29.336921 1054224 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:03:29.336995 1054224 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:03:29.397468 1054224 command_runner.go:130] > {
	I0830 22:03:29.397485 1054224 command_runner.go:130] >   "images": [
	I0830 22:03:29.397490 1054224 command_runner.go:130] >     {
	I0830 22:03:29.397500 1054224 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0830 22:03:29.397505 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.397513 1054224 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0830 22:03:29.397517 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.397522 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.397533 1054224 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0830 22:03:29.397542 1054224 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0830 22:03:29.397547 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.397552 1054224 command_runner.go:130] >       "size": "60881430",
	I0830 22:03:29.397557 1054224 command_runner.go:130] >       "uid": null,
	I0830 22:03:29.397562 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.397571 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.397576 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.397580 1054224 command_runner.go:130] >     },
	I0830 22:03:29.397584 1054224 command_runner.go:130] >     {
	I0830 22:03:29.397592 1054224 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0830 22:03:29.397597 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.397604 1054224 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0830 22:03:29.397608 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.397613 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.397623 1054224 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0830 22:03:29.397633 1054224 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0830 22:03:29.397637 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.397645 1054224 command_runner.go:130] >       "size": "29037500",
	I0830 22:03:29.397650 1054224 command_runner.go:130] >       "uid": null,
	I0830 22:03:29.397655 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.397660 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.397665 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.397669 1054224 command_runner.go:130] >     },
	I0830 22:03:29.397673 1054224 command_runner.go:130] >     {
	I0830 22:03:29.397680 1054224 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0830 22:03:29.397686 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.397693 1054224 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0830 22:03:29.397697 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.397702 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.397711 1054224 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0830 22:03:29.397721 1054224 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0830 22:03:29.397725 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.397730 1054224 command_runner.go:130] >       "size": "51393451",
	I0830 22:03:29.397735 1054224 command_runner.go:130] >       "uid": null,
	I0830 22:03:29.397742 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.397746 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.397756 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.397760 1054224 command_runner.go:130] >     },
	I0830 22:03:29.397764 1054224 command_runner.go:130] >     {
	I0830 22:03:29.397772 1054224 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0830 22:03:29.397777 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.397783 1054224 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0830 22:03:29.397787 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.397792 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.397800 1054224 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0830 22:03:29.397809 1054224 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0830 22:03:29.397818 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.397823 1054224 command_runner.go:130] >       "size": "182203183",
	I0830 22:03:29.397828 1054224 command_runner.go:130] >       "uid": {
	I0830 22:03:29.397832 1054224 command_runner.go:130] >         "value": "0"
	I0830 22:03:29.397836 1054224 command_runner.go:130] >       },
	I0830 22:03:29.397841 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.397846 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.397851 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.397855 1054224 command_runner.go:130] >     },
	I0830 22:03:29.397859 1054224 command_runner.go:130] >     {
	I0830 22:03:29.397867 1054224 command_runner.go:130] >       "id": "b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a",
	I0830 22:03:29.397871 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.397877 1054224 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0830 22:03:29.397882 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.397886 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.397896 1054224 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:d4ad404d1c05c2f18b76f5d6936b838be07fed14b3ffefd09a6b2f0c20e3ef5c",
	I0830 22:03:29.397905 1054224 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0830 22:03:29.397909 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.397915 1054224 command_runner.go:130] >       "size": "120857550",
	I0830 22:03:29.397919 1054224 command_runner.go:130] >       "uid": {
	I0830 22:03:29.397924 1054224 command_runner.go:130] >         "value": "0"
	I0830 22:03:29.397928 1054224 command_runner.go:130] >       },
	I0830 22:03:29.397933 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.397938 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.397942 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.397946 1054224 command_runner.go:130] >     },
	I0830 22:03:29.397951 1054224 command_runner.go:130] >     {
	I0830 22:03:29.397958 1054224 command_runner.go:130] >       "id": "8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965",
	I0830 22:03:29.397963 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.397970 1054224 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0830 22:03:29.397974 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.397979 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.397989 1054224 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4a0dd5abeba8e3ca67884fe9db43e8dbb299ad3199f0c6281e8a70f03ce4248f",
	I0830 22:03:29.397999 1054224 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0830 22:03:29.398004 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.398009 1054224 command_runner.go:130] >       "size": "117187378",
	I0830 22:03:29.398014 1054224 command_runner.go:130] >       "uid": {
	I0830 22:03:29.398019 1054224 command_runner.go:130] >         "value": "0"
	I0830 22:03:29.398023 1054224 command_runner.go:130] >       },
	I0830 22:03:29.398027 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.398032 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.398037 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.398040 1054224 command_runner.go:130] >     },
	I0830 22:03:29.398044 1054224 command_runner.go:130] >     {
	I0830 22:03:29.398052 1054224 command_runner.go:130] >       "id": "812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26",
	I0830 22:03:29.398057 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.398063 1054224 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0830 22:03:29.398067 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.398072 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.398081 1054224 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c",
	I0830 22:03:29.398090 1054224 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a9d9eaff8bae5cb45cc640255fd1490c85c3517d92f2c78bcd71dde9a12d5220"
	I0830 22:03:29.398095 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.398100 1054224 command_runner.go:130] >       "size": "69926807",
	I0830 22:03:29.398105 1054224 command_runner.go:130] >       "uid": null,
	I0830 22:03:29.398110 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.398114 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.398119 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.398123 1054224 command_runner.go:130] >     },
	I0830 22:03:29.398127 1054224 command_runner.go:130] >     {
	I0830 22:03:29.398135 1054224 command_runner.go:130] >       "id": "b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87",
	I0830 22:03:29.398139 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.398145 1054224 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0830 22:03:29.398149 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.398154 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.398182 1054224 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0bb4ad9c0c3d2258bc97616ddb51291e5d20d6ba7d4406767f4355f56fab842d",
	I0830 22:03:29.398192 1054224 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4"
	I0830 22:03:29.398196 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.398202 1054224 command_runner.go:130] >       "size": "59188020",
	I0830 22:03:29.398206 1054224 command_runner.go:130] >       "uid": {
	I0830 22:03:29.398211 1054224 command_runner.go:130] >         "value": "0"
	I0830 22:03:29.398215 1054224 command_runner.go:130] >       },
	I0830 22:03:29.398220 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.398225 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.398230 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.398234 1054224 command_runner.go:130] >     },
	I0830 22:03:29.398239 1054224 command_runner.go:130] >     {
	I0830 22:03:29.398246 1054224 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0830 22:03:29.398251 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.398257 1054224 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0830 22:03:29.398261 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.398266 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.398276 1054224 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0830 22:03:29.398286 1054224 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0830 22:03:29.398290 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.398296 1054224 command_runner.go:130] >       "size": "520014",
	I0830 22:03:29.398300 1054224 command_runner.go:130] >       "uid": {
	I0830 22:03:29.398305 1054224 command_runner.go:130] >         "value": "65535"
	I0830 22:03:29.398310 1054224 command_runner.go:130] >       },
	I0830 22:03:29.398315 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.398320 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.398325 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.398329 1054224 command_runner.go:130] >     }
	I0830 22:03:29.398333 1054224 command_runner.go:130] >   ]
	I0830 22:03:29.398337 1054224 command_runner.go:130] > }
	I0830 22:03:29.400327 1054224 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:03:29.400384 1054224 crio.go:415] Images already preloaded, skipping extraction
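(The preload check works by listing what CRI-O already holds and comparing repo tags against the expected bundle; the JSON above is what it consumes. To eyeball the same data by hand, one flattened view, jq assumed available, which the test itself does not use:)

	# one repo tag per line from the crictl listing
	sudo crictl images --output json | jq -r '.images[].repoTags[]'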
	I0830 22:03:29.400464 1054224 ssh_runner.go:195] Run: sudo crictl images --output json
	I0830 22:03:29.442869 1054224 command_runner.go:130] > {
	I0830 22:03:29.442889 1054224 command_runner.go:130] >   "images": [
	I0830 22:03:29.442894 1054224 command_runner.go:130] >     {
	I0830 22:03:29.442904 1054224 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0830 22:03:29.442909 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.442916 1054224 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0830 22:03:29.442921 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.442927 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.442941 1054224 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0830 22:03:29.442954 1054224 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0830 22:03:29.442959 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.442966 1054224 command_runner.go:130] >       "size": "60881430",
	I0830 22:03:29.442972 1054224 command_runner.go:130] >       "uid": null,
	I0830 22:03:29.442979 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.442989 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.442997 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.443001 1054224 command_runner.go:130] >     },
	I0830 22:03:29.443008 1054224 command_runner.go:130] >     {
	I0830 22:03:29.443015 1054224 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0830 22:03:29.443023 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.443031 1054224 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0830 22:03:29.443038 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443043 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.443052 1054224 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0830 22:03:29.443064 1054224 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0830 22:03:29.443068 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443074 1054224 command_runner.go:130] >       "size": "29037500",
	I0830 22:03:29.443079 1054224 command_runner.go:130] >       "uid": null,
	I0830 22:03:29.443088 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.443096 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.443104 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.443109 1054224 command_runner.go:130] >     },
	I0830 22:03:29.443114 1054224 command_runner.go:130] >     {
	I0830 22:03:29.443127 1054224 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0830 22:03:29.443132 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.443141 1054224 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0830 22:03:29.443147 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443153 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.443163 1054224 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0830 22:03:29.443176 1054224 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0830 22:03:29.443180 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443188 1054224 command_runner.go:130] >       "size": "51393451",
	I0830 22:03:29.443193 1054224 command_runner.go:130] >       "uid": null,
	I0830 22:03:29.443198 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.443206 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.443210 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.443215 1054224 command_runner.go:130] >     },
	I0830 22:03:29.443220 1054224 command_runner.go:130] >     {
	I0830 22:03:29.443230 1054224 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0830 22:03:29.443237 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.443244 1054224 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0830 22:03:29.443249 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443257 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.443266 1054224 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0830 22:03:29.443278 1054224 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0830 22:03:29.443286 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443294 1054224 command_runner.go:130] >       "size": "182203183",
	I0830 22:03:29.443299 1054224 command_runner.go:130] >       "uid": {
	I0830 22:03:29.443304 1054224 command_runner.go:130] >         "value": "0"
	I0830 22:03:29.443309 1054224 command_runner.go:130] >       },
	I0830 22:03:29.443317 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.443328 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.443333 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.443337 1054224 command_runner.go:130] >     },
	I0830 22:03:29.443344 1054224 command_runner.go:130] >     {
	I0830 22:03:29.443353 1054224 command_runner.go:130] >       "id": "b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a",
	I0830 22:03:29.443360 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.443366 1054224 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0830 22:03:29.443382 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443390 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.443399 1054224 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:d4ad404d1c05c2f18b76f5d6936b838be07fed14b3ffefd09a6b2f0c20e3ef5c",
	I0830 22:03:29.443413 1054224 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0830 22:03:29.443417 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443425 1054224 command_runner.go:130] >       "size": "120857550",
	I0830 22:03:29.443430 1054224 command_runner.go:130] >       "uid": {
	I0830 22:03:29.443435 1054224 command_runner.go:130] >         "value": "0"
	I0830 22:03:29.443442 1054224 command_runner.go:130] >       },
	I0830 22:03:29.443447 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.443453 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.443461 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.443465 1054224 command_runner.go:130] >     },
	I0830 22:03:29.443470 1054224 command_runner.go:130] >     {
	I0830 22:03:29.443478 1054224 command_runner.go:130] >       "id": "8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965",
	I0830 22:03:29.443485 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.443492 1054224 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0830 22:03:29.443499 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443504 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.443514 1054224 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4a0dd5abeba8e3ca67884fe9db43e8dbb299ad3199f0c6281e8a70f03ce4248f",
	I0830 22:03:29.443527 1054224 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0830 22:03:29.443531 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443539 1054224 command_runner.go:130] >       "size": "117187378",
	I0830 22:03:29.443543 1054224 command_runner.go:130] >       "uid": {
	I0830 22:03:29.443551 1054224 command_runner.go:130] >         "value": "0"
	I0830 22:03:29.443555 1054224 command_runner.go:130] >       },
	I0830 22:03:29.443562 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.443579 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.443586 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.443591 1054224 command_runner.go:130] >     },
	I0830 22:03:29.443596 1054224 command_runner.go:130] >     {
	I0830 22:03:29.443606 1054224 command_runner.go:130] >       "id": "812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26",
	I0830 22:03:29.443611 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.443623 1054224 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0830 22:03:29.443628 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443635 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.443644 1054224 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c",
	I0830 22:03:29.443656 1054224 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a9d9eaff8bae5cb45cc640255fd1490c85c3517d92f2c78bcd71dde9a12d5220"
	I0830 22:03:29.443660 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443666 1054224 command_runner.go:130] >       "size": "69926807",
	I0830 22:03:29.443673 1054224 command_runner.go:130] >       "uid": null,
	I0830 22:03:29.443678 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.443687 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.443692 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.443700 1054224 command_runner.go:130] >     },
	I0830 22:03:29.443704 1054224 command_runner.go:130] >     {
	I0830 22:03:29.443713 1054224 command_runner.go:130] >       "id": "b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87",
	I0830 22:03:29.443722 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.443728 1054224 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0830 22:03:29.443735 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443740 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.443775 1054224 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0bb4ad9c0c3d2258bc97616ddb51291e5d20d6ba7d4406767f4355f56fab842d",
	I0830 22:03:29.443790 1054224 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4"
	I0830 22:03:29.443795 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443803 1054224 command_runner.go:130] >       "size": "59188020",
	I0830 22:03:29.443808 1054224 command_runner.go:130] >       "uid": {
	I0830 22:03:29.443813 1054224 command_runner.go:130] >         "value": "0"
	I0830 22:03:29.443820 1054224 command_runner.go:130] >       },
	I0830 22:03:29.443825 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.443829 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.443834 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.443841 1054224 command_runner.go:130] >     },
	I0830 22:03:29.443845 1054224 command_runner.go:130] >     {
	I0830 22:03:29.443855 1054224 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0830 22:03:29.443860 1054224 command_runner.go:130] >       "repoTags": [
	I0830 22:03:29.443868 1054224 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0830 22:03:29.443873 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443878 1054224 command_runner.go:130] >       "repoDigests": [
	I0830 22:03:29.443889 1054224 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0830 22:03:29.443899 1054224 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0830 22:03:29.443907 1054224 command_runner.go:130] >       ],
	I0830 22:03:29.443912 1054224 command_runner.go:130] >       "size": "520014",
	I0830 22:03:29.443917 1054224 command_runner.go:130] >       "uid": {
	I0830 22:03:29.443922 1054224 command_runner.go:130] >         "value": "65535"
	I0830 22:03:29.443931 1054224 command_runner.go:130] >       },
	I0830 22:03:29.443937 1054224 command_runner.go:130] >       "username": "",
	I0830 22:03:29.443942 1054224 command_runner.go:130] >       "spec": null,
	I0830 22:03:29.443949 1054224 command_runner.go:130] >       "pinned": false
	I0830 22:03:29.443954 1054224 command_runner.go:130] >     }
	I0830 22:03:29.443958 1054224 command_runner.go:130] >   ]
	I0830 22:03:29.443964 1054224 command_runner.go:130] > }
	I0830 22:03:29.446795 1054224 crio.go:496] all images are preloaded for cri-o runtime.
	I0830 22:03:29.446842 1054224 cache_images.go:84] Images are preloaded, skipping loading
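
	The JSON above is the CRI image service's response that minikube inspects when verifying the preload. A minimal Go sketch for reproducing the check by hand, assuming crictl is installed on the node and pointed at the default CRI-O socket; the struct mirrors only the fields visible in this log, not the full CRI API:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// image mirrors the fields visible in the dump above; the real CRI
	// response carries more (uid, username, spec).
	type image struct {
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	func main() {
		// Same data source as the log: the CRI image list, as JSON.
		out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var resp struct {
			Images []image `json:"images"`
		}
		if err := json.Unmarshal(out, &resp); err != nil {
			log.Fatal(err)
		}
		for _, img := range resp.Images {
			fmt.Println(img.RepoTags, img.Size)
		}
	}
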
	I0830 22:03:29.446942 1054224 ssh_runner.go:195] Run: crio config
	I0830 22:03:29.497736 1054224 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0830 22:03:29.497760 1054224 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0830 22:03:29.497770 1054224 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0830 22:03:29.497774 1054224 command_runner.go:130] > #
	I0830 22:03:29.497791 1054224 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0830 22:03:29.497802 1054224 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0830 22:03:29.497812 1054224 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0830 22:03:29.497833 1054224 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0830 22:03:29.497838 1054224 command_runner.go:130] > # reload'.
	I0830 22:03:29.497848 1054224 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0830 22:03:29.497858 1054224 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0830 22:03:29.497866 1054224 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0830 22:03:29.497875 1054224 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0830 22:03:29.497882 1054224 command_runner.go:130] > [crio]
	I0830 22:03:29.497889 1054224 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0830 22:03:29.497897 1054224 command_runner.go:130] > # container images, in this directory.
	I0830 22:03:29.498717 1054224 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0830 22:03:29.498741 1054224 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0830 22:03:29.499417 1054224 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0830 22:03:29.499437 1054224 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0830 22:03:29.499449 1054224 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0830 22:03:29.500168 1054224 command_runner.go:130] > # storage_driver = "vfs"
	I0830 22:03:29.500194 1054224 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I0830 22:03:29.500205 1054224 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0830 22:03:29.500546 1054224 command_runner.go:130] > # storage_option = [
	I0830 22:03:29.500920 1054224 command_runner.go:130] > # ]
	I0830 22:03:29.500937 1054224 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0830 22:03:29.500951 1054224 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0830 22:03:29.501659 1054224 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0830 22:03:29.501676 1054224 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0830 22:03:29.501684 1054224 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0830 22:03:29.501694 1054224 command_runner.go:130] > # always happen on a node reboot
	I0830 22:03:29.502363 1054224 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0830 22:03:29.502379 1054224 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0830 22:03:29.502387 1054224 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0830 22:03:29.502402 1054224 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0830 22:03:29.503098 1054224 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0830 22:03:29.503116 1054224 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0830 22:03:29.503127 1054224 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0830 22:03:29.503813 1054224 command_runner.go:130] > # internal_wipe = true
	I0830 22:03:29.503828 1054224 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0830 22:03:29.503837 1054224 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0830 22:03:29.503846 1054224 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0830 22:03:29.504514 1054224 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0830 22:03:29.504542 1054224 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0830 22:03:29.504553 1054224 command_runner.go:130] > [crio.api]
	I0830 22:03:29.504560 1054224 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0830 22:03:29.505303 1054224 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0830 22:03:29.505324 1054224 command_runner.go:130] > # IP address on which the stream server will listen.
	I0830 22:03:29.506112 1054224 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0830 22:03:29.506135 1054224 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0830 22:03:29.506142 1054224 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0830 22:03:29.506912 1054224 command_runner.go:130] > # stream_port = "0"
	I0830 22:03:29.506933 1054224 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0830 22:03:29.507358 1054224 command_runner.go:130] > # stream_enable_tls = false
	I0830 22:03:29.507374 1054224 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0830 22:03:29.507616 1054224 command_runner.go:130] > # stream_idle_timeout = ""
	I0830 22:03:29.507633 1054224 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0830 22:03:29.507641 1054224 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0830 22:03:29.507646 1054224 command_runner.go:130] > # minutes.
	I0830 22:03:29.508957 1054224 command_runner.go:130] > # stream_tls_cert = ""
	I0830 22:03:29.508976 1054224 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0830 22:03:29.508984 1054224 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0830 22:03:29.508993 1054224 command_runner.go:130] > # stream_tls_key = ""
	I0830 22:03:29.509007 1054224 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0830 22:03:29.509015 1054224 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0830 22:03:29.509024 1054224 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0830 22:03:29.509029 1054224 command_runner.go:130] > # stream_tls_ca = ""
	I0830 22:03:29.509038 1054224 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0830 22:03:29.509044 1054224 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0830 22:03:29.509055 1054224 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0830 22:03:29.509063 1054224 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0830 22:03:29.509077 1054224 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0830 22:03:29.509087 1054224 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0830 22:03:29.509091 1054224 command_runner.go:130] > [crio.runtime]
	I0830 22:03:29.509101 1054224 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0830 22:03:29.509107 1054224 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0830 22:03:29.509112 1054224 command_runner.go:130] > # "nofile=1024:2048"
	I0830 22:03:29.509123 1054224 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0830 22:03:29.509144 1054224 command_runner.go:130] > # default_ulimits = [
	I0830 22:03:29.509150 1054224 command_runner.go:130] > # ]
	I0830 22:03:29.509160 1054224 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0830 22:03:29.509170 1054224 command_runner.go:130] > # no_pivot = false
	I0830 22:03:29.509177 1054224 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0830 22:03:29.509188 1054224 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0830 22:03:29.509194 1054224 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0830 22:03:29.509201 1054224 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0830 22:03:29.509207 1054224 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0830 22:03:29.509215 1054224 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0830 22:03:29.509222 1054224 command_runner.go:130] > # conmon = ""
	I0830 22:03:29.509227 1054224 command_runner.go:130] > # Cgroup setting for conmon
	I0830 22:03:29.509235 1054224 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0830 22:03:29.509240 1054224 command_runner.go:130] > conmon_cgroup = "pod"
	I0830 22:03:29.509250 1054224 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0830 22:03:29.509259 1054224 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0830 22:03:29.509267 1054224 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0830 22:03:29.509276 1054224 command_runner.go:130] > # conmon_env = [
	I0830 22:03:29.509280 1054224 command_runner.go:130] > # ]
	I0830 22:03:29.509286 1054224 command_runner.go:130] > # Additional environment variables to set for all the
	I0830 22:03:29.509292 1054224 command_runner.go:130] > # containers. These are overridden if set in the
	I0830 22:03:29.509299 1054224 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0830 22:03:29.509304 1054224 command_runner.go:130] > # default_env = [
	I0830 22:03:29.509307 1054224 command_runner.go:130] > # ]
	I0830 22:03:29.509314 1054224 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0830 22:03:29.509319 1054224 command_runner.go:130] > # selinux = false
	I0830 22:03:29.509326 1054224 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0830 22:03:29.509333 1054224 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0830 22:03:29.509340 1054224 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0830 22:03:29.509345 1054224 command_runner.go:130] > # seccomp_profile = ""
	I0830 22:03:29.509352 1054224 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0830 22:03:29.509361 1054224 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0830 22:03:29.509369 1054224 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0830 22:03:29.509377 1054224 command_runner.go:130] > # which might increase security.
	I0830 22:03:29.509400 1054224 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0830 22:03:29.509414 1054224 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0830 22:03:29.509421 1054224 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0830 22:03:29.509429 1054224 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0830 22:03:29.509440 1054224 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0830 22:03:29.509449 1054224 command_runner.go:130] > # This option supports live configuration reload.
	I0830 22:03:29.509454 1054224 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0830 22:03:29.509463 1054224 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0830 22:03:29.509472 1054224 command_runner.go:130] > # the cgroup blockio controller.
	I0830 22:03:29.509477 1054224 command_runner.go:130] > # blockio_config_file = ""
	I0830 22:03:29.509485 1054224 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0830 22:03:29.509493 1054224 command_runner.go:130] > # irqbalance daemon.
	I0830 22:03:29.509499 1054224 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0830 22:03:29.509507 1054224 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0830 22:03:29.509513 1054224 command_runner.go:130] > # This option supports live configuration reload.
	I0830 22:03:29.509520 1054224 command_runner.go:130] > # rdt_config_file = ""
	I0830 22:03:29.509527 1054224 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0830 22:03:29.509534 1054224 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0830 22:03:29.509541 1054224 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0830 22:03:29.509548 1054224 command_runner.go:130] > # separate_pull_cgroup = ""
	I0830 22:03:29.509558 1054224 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0830 22:03:29.509566 1054224 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0830 22:03:29.509573 1054224 command_runner.go:130] > # will be added.
	I0830 22:03:29.509578 1054224 command_runner.go:130] > # default_capabilities = [
	I0830 22:03:29.509583 1054224 command_runner.go:130] > # 	"CHOWN",
	I0830 22:03:29.509587 1054224 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0830 22:03:29.509592 1054224 command_runner.go:130] > # 	"FSETID",
	I0830 22:03:29.509596 1054224 command_runner.go:130] > # 	"FOWNER",
	I0830 22:03:29.509604 1054224 command_runner.go:130] > # 	"SETGID",
	I0830 22:03:29.509611 1054224 command_runner.go:130] > # 	"SETUID",
	I0830 22:03:29.509617 1054224 command_runner.go:130] > # 	"SETPCAP",
	I0830 22:03:29.509630 1054224 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0830 22:03:29.509635 1054224 command_runner.go:130] > # 	"KILL",
	I0830 22:03:29.509638 1054224 command_runner.go:130] > # ]
	I0830 22:03:29.509648 1054224 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0830 22:03:29.509659 1054224 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0830 22:03:29.509665 1054224 command_runner.go:130] > # add_inheritable_capabilities = true
	I0830 22:03:29.509672 1054224 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0830 22:03:29.509680 1054224 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0830 22:03:29.509687 1054224 command_runner.go:130] > # default_sysctls = [
	I0830 22:03:29.509690 1054224 command_runner.go:130] > # ]
	I0830 22:03:29.509699 1054224 command_runner.go:130] > # List of devices on the host that a
	I0830 22:03:29.509709 1054224 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0830 22:03:29.509717 1054224 command_runner.go:130] > # allowed_devices = [
	I0830 22:03:29.509722 1054224 command_runner.go:130] > # 	"/dev/fuse",
	I0830 22:03:29.509726 1054224 command_runner.go:130] > # ]
	I0830 22:03:29.509732 1054224 command_runner.go:130] > # List of additional devices, specified as
	I0830 22:03:29.509750 1054224 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0830 22:03:29.509757 1054224 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0830 22:03:29.509765 1054224 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0830 22:03:29.509782 1054224 command_runner.go:130] > # additional_devices = [
	I0830 22:03:29.509786 1054224 command_runner.go:130] > # ]
	I0830 22:03:29.509792 1054224 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0830 22:03:29.509799 1054224 command_runner.go:130] > # cdi_spec_dirs = [
	I0830 22:03:29.509804 1054224 command_runner.go:130] > # 	"/etc/cdi",
	I0830 22:03:29.509810 1054224 command_runner.go:130] > # 	"/var/run/cdi",
	I0830 22:03:29.509816 1054224 command_runner.go:130] > # ]
	I0830 22:03:29.509826 1054224 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0830 22:03:29.509833 1054224 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0830 22:03:29.509838 1054224 command_runner.go:130] > # Defaults to false.
	I0830 22:03:29.509847 1054224 command_runner.go:130] > # device_ownership_from_security_context = false
	I0830 22:03:29.509857 1054224 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0830 22:03:29.509868 1054224 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0830 22:03:29.509875 1054224 command_runner.go:130] > # hooks_dir = [
	I0830 22:03:29.509881 1054224 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0830 22:03:29.509885 1054224 command_runner.go:130] > # ]
	I0830 22:03:29.509893 1054224 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0830 22:03:29.509903 1054224 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0830 22:03:29.509910 1054224 command_runner.go:130] > # its default mounts from the following two files:
	I0830 22:03:29.509914 1054224 command_runner.go:130] > #
	I0830 22:03:29.509922 1054224 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0830 22:03:29.509932 1054224 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0830 22:03:29.509948 1054224 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0830 22:03:29.509952 1054224 command_runner.go:130] > #
	I0830 22:03:29.509960 1054224 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0830 22:03:29.509970 1054224 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0830 22:03:29.509978 1054224 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0830 22:03:29.509986 1054224 command_runner.go:130] > #      only add mounts it finds in this file.
	I0830 22:03:29.509990 1054224 command_runner.go:130] > #
	I0830 22:03:29.509995 1054224 command_runner.go:130] > # default_mounts_file = ""
	I0830 22:03:29.510002 1054224 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0830 22:03:29.510009 1054224 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0830 22:03:29.510014 1054224 command_runner.go:130] > # pids_limit = 0
	I0830 22:03:29.510022 1054224 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0830 22:03:29.510030 1054224 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0830 22:03:29.510037 1054224 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0830 22:03:29.510047 1054224 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0830 22:03:29.510051 1054224 command_runner.go:130] > # log_size_max = -1
	I0830 22:03:29.510077 1054224 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0830 22:03:29.510090 1054224 command_runner.go:130] > # log_to_journald = false
	I0830 22:03:29.510097 1054224 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0830 22:03:29.510106 1054224 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0830 22:03:29.510118 1054224 command_runner.go:130] > # Path to directory for container attach sockets.
	I0830 22:03:29.510124 1054224 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0830 22:03:29.510131 1054224 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0830 22:03:29.510139 1054224 command_runner.go:130] > # bind_mount_prefix = ""
	I0830 22:03:29.510146 1054224 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0830 22:03:29.510153 1054224 command_runner.go:130] > # read_only = false
	I0830 22:03:29.510160 1054224 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0830 22:03:29.510170 1054224 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0830 22:03:29.510175 1054224 command_runner.go:130] > # live configuration reload.
	I0830 22:03:29.510183 1054224 command_runner.go:130] > # log_level = "info"
	I0830 22:03:29.510190 1054224 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0830 22:03:29.510199 1054224 command_runner.go:130] > # This option supports live configuration reload.
	I0830 22:03:29.510204 1054224 command_runner.go:130] > # log_filter = ""
	I0830 22:03:29.510211 1054224 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0830 22:03:29.510222 1054224 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0830 22:03:29.510228 1054224 command_runner.go:130] > # separated by comma.
	I0830 22:03:29.510236 1054224 command_runner.go:130] > # uid_mappings = ""
	I0830 22:03:29.510244 1054224 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0830 22:03:29.510253 1054224 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0830 22:03:29.510259 1054224 command_runner.go:130] > # separated by comma.
	I0830 22:03:29.510264 1054224 command_runner.go:130] > # gid_mappings = ""
	I0830 22:03:29.510274 1054224 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0830 22:03:29.510281 1054224 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0830 22:03:29.510291 1054224 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0830 22:03:29.510296 1054224 command_runner.go:130] > # minimum_mappable_uid = -1
	I0830 22:03:29.510303 1054224 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0830 22:03:29.510311 1054224 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0830 22:03:29.510321 1054224 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0830 22:03:29.510326 1054224 command_runner.go:130] > # minimum_mappable_gid = -1
	I0830 22:03:29.510336 1054224 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0830 22:03:29.510343 1054224 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0830 22:03:29.510350 1054224 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0830 22:03:29.510357 1054224 command_runner.go:130] > # ctr_stop_timeout = 30
	I0830 22:03:29.510364 1054224 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0830 22:03:29.510373 1054224 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0830 22:03:29.510381 1054224 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0830 22:03:29.510390 1054224 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0830 22:03:29.510395 1054224 command_runner.go:130] > # drop_infra_ctr = true
	I0830 22:03:29.510402 1054224 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0830 22:03:29.510412 1054224 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0830 22:03:29.510420 1054224 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0830 22:03:29.510428 1054224 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0830 22:03:29.510436 1054224 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0830 22:03:29.510442 1054224 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0830 22:03:29.510450 1054224 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0830 22:03:29.510458 1054224 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0830 22:03:29.510463 1054224 command_runner.go:130] > # pinns_path = ""
	I0830 22:03:29.510471 1054224 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0830 22:03:29.510481 1054224 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0830 22:03:29.510491 1054224 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0830 22:03:29.510496 1054224 command_runner.go:130] > # default_runtime = "runc"
	I0830 22:03:29.510502 1054224 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0830 22:03:29.510514 1054224 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0830 22:03:29.510525 1054224 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0830 22:03:29.510534 1054224 command_runner.go:130] > # creation as a file is not desired either.
	I0830 22:03:29.510545 1054224 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0830 22:03:29.510551 1054224 command_runner.go:130] > # the hostname is being managed dynamically.
	I0830 22:03:29.510557 1054224 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0830 22:03:29.510566 1054224 command_runner.go:130] > # ]
	I0830 22:03:29.510574 1054224 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0830 22:03:29.510584 1054224 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0830 22:03:29.510594 1054224 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0830 22:03:29.510604 1054224 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0830 22:03:29.510609 1054224 command_runner.go:130] > #
	I0830 22:03:29.510616 1054224 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0830 22:03:29.510622 1054224 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0830 22:03:29.510631 1054224 command_runner.go:130] > #  runtime_type = "oci"
	I0830 22:03:29.510638 1054224 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0830 22:03:29.510645 1054224 command_runner.go:130] > #  privileged_without_host_devices = false
	I0830 22:03:29.510653 1054224 command_runner.go:130] > #  allowed_annotations = []
	I0830 22:03:29.510657 1054224 command_runner.go:130] > # Where:
	I0830 22:03:29.510685 1054224 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0830 22:03:29.510699 1054224 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0830 22:03:29.510710 1054224 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0830 22:03:29.510719 1054224 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0830 22:03:29.510727 1054224 command_runner.go:130] > #   in $PATH.
	I0830 22:03:29.510734 1054224 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0830 22:03:29.510743 1054224 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0830 22:03:29.510750 1054224 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0830 22:03:29.510754 1054224 command_runner.go:130] > #   state.
	I0830 22:03:29.510762 1054224 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0830 22:03:29.510772 1054224 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0830 22:03:29.510780 1054224 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0830 22:03:29.510790 1054224 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0830 22:03:29.510797 1054224 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0830 22:03:29.510805 1054224 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0830 22:03:29.510813 1054224 command_runner.go:130] > #   The currently recognized values are:
	I0830 22:03:29.510823 1054224 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0830 22:03:29.510831 1054224 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0830 22:03:29.510841 1054224 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0830 22:03:29.510849 1054224 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0830 22:03:29.510860 1054224 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0830 22:03:29.510871 1054224 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0830 22:03:29.510878 1054224 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0830 22:03:29.510886 1054224 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0830 22:03:29.510894 1054224 command_runner.go:130] > #   should be moved to the container's cgroup
	I0830 22:03:29.510900 1054224 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0830 22:03:29.510913 1054224 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0830 22:03:29.510918 1054224 command_runner.go:130] > runtime_type = "oci"
	I0830 22:03:29.510923 1054224 command_runner.go:130] > runtime_root = "/run/runc"
	I0830 22:03:29.510931 1054224 command_runner.go:130] > runtime_config_path = ""
	I0830 22:03:29.510936 1054224 command_runner.go:130] > monitor_path = ""
	I0830 22:03:29.510941 1054224 command_runner.go:130] > monitor_cgroup = ""
	I0830 22:03:29.510948 1054224 command_runner.go:130] > monitor_exec_cgroup = ""
	I0830 22:03:29.510964 1054224 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0830 22:03:29.510972 1054224 command_runner.go:130] > # running containers
	I0830 22:03:29.510979 1054224 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0830 22:03:29.510988 1054224 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0830 22:03:29.511000 1054224 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0830 22:03:29.511007 1054224 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0830 22:03:29.511015 1054224 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0830 22:03:29.511022 1054224 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0830 22:03:29.511028 1054224 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0830 22:03:29.511036 1054224 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0830 22:03:29.511042 1054224 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0830 22:03:29.511047 1054224 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0830 22:03:29.511057 1054224 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0830 22:03:29.511066 1054224 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0830 22:03:29.511073 1054224 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0830 22:03:29.511082 1054224 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0830 22:03:29.511093 1054224 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0830 22:03:29.511100 1054224 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0830 22:03:29.511115 1054224 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0830 22:03:29.511124 1054224 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0830 22:03:29.511132 1054224 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0830 22:03:29.511144 1054224 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0830 22:03:29.511151 1054224 command_runner.go:130] > # Example:
	I0830 22:03:29.511157 1054224 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0830 22:03:29.511163 1054224 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0830 22:03:29.511171 1054224 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0830 22:03:29.511178 1054224 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0830 22:03:29.511185 1054224 command_runner.go:130] > # cpuset = 0
	I0830 22:03:29.511190 1054224 command_runner.go:130] > # cpushares = "0-1"
	I0830 22:03:29.511195 1054224 command_runner.go:130] > # Where:
	I0830 22:03:29.511201 1054224 command_runner.go:130] > # The workload name is workload-type.
	I0830 22:03:29.511212 1054224 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0830 22:03:29.511223 1054224 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0830 22:03:29.511230 1054224 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0830 22:03:29.511255 1054224 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0830 22:03:29.511269 1054224 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0830 22:03:29.511273 1054224 command_runner.go:130] > # 
	I0830 22:03:29.511281 1054224 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0830 22:03:29.511287 1054224 command_runner.go:130] > #
	I0830 22:03:29.511297 1054224 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0830 22:03:29.511308 1054224 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0830 22:03:29.511316 1054224 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0830 22:03:29.511324 1054224 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0830 22:03:29.511333 1054224 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0830 22:03:29.511337 1054224 command_runner.go:130] > [crio.image]
	I0830 22:03:29.511346 1054224 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0830 22:03:29.511354 1054224 command_runner.go:130] > # default_transport = "docker://"
	I0830 22:03:29.511364 1054224 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0830 22:03:29.511372 1054224 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0830 22:03:29.511377 1054224 command_runner.go:130] > # global_auth_file = ""
	I0830 22:03:29.511386 1054224 command_runner.go:130] > # The image used to instantiate infra containers.
	I0830 22:03:29.511394 1054224 command_runner.go:130] > # This option supports live configuration reload.
	I0830 22:03:29.511400 1054224 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0830 22:03:29.511410 1054224 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0830 22:03:29.511417 1054224 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0830 22:03:29.511424 1054224 command_runner.go:130] > # This option supports live configuration reload.
	I0830 22:03:29.511431 1054224 command_runner.go:130] > # pause_image_auth_file = ""
	I0830 22:03:29.511440 1054224 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0830 22:03:29.511448 1054224 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0830 22:03:29.511458 1054224 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0830 22:03:29.511468 1054224 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0830 22:03:29.511482 1054224 command_runner.go:130] > # pause_command = "/pause"
	I0830 22:03:29.511490 1054224 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0830 22:03:29.511498 1054224 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0830 22:03:29.511509 1054224 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0830 22:03:29.511516 1054224 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0830 22:03:29.511526 1054224 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0830 22:03:29.511531 1054224 command_runner.go:130] > # signature_policy = ""
	I0830 22:03:29.511538 1054224 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0830 22:03:29.511546 1054224 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0830 22:03:29.511552 1054224 command_runner.go:130] > # changing them here.
	I0830 22:03:29.511558 1054224 command_runner.go:130] > # insecure_registries = [
	I0830 22:03:29.511564 1054224 command_runner.go:130] > # ]
	I0830 22:03:29.511578 1054224 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0830 22:03:29.511585 1054224 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0830 22:03:29.511599 1054224 command_runner.go:130] > # image_volumes = "mkdir"
	I0830 22:03:29.511606 1054224 command_runner.go:130] > # Temporary directory to use for storing big files
	I0830 22:03:29.511614 1054224 command_runner.go:130] > # big_files_temporary_dir = ""
	I0830 22:03:29.511621 1054224 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0830 22:03:29.511626 1054224 command_runner.go:130] > # CNI plugins.
	I0830 22:03:29.511633 1054224 command_runner.go:130] > [crio.network]
	I0830 22:03:29.511642 1054224 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0830 22:03:29.511649 1054224 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0830 22:03:29.511656 1054224 command_runner.go:130] > # cni_default_network = ""
	I0830 22:03:29.511663 1054224 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0830 22:03:29.511668 1054224 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0830 22:03:29.511677 1054224 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0830 22:03:29.511685 1054224 command_runner.go:130] > # plugin_dirs = [
	I0830 22:03:29.511694 1054224 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0830 22:03:29.511698 1054224 command_runner.go:130] > # ]
	I0830 22:03:29.511705 1054224 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0830 22:03:29.511712 1054224 command_runner.go:130] > [crio.metrics]
	I0830 22:03:29.511720 1054224 command_runner.go:130] > # Globally enable or disable metrics support.
	I0830 22:03:29.511725 1054224 command_runner.go:130] > # enable_metrics = false
	I0830 22:03:29.511733 1054224 command_runner.go:130] > # Specify enabled metrics collectors.
	I0830 22:03:29.511739 1054224 command_runner.go:130] > # Per default all metrics are enabled.
	I0830 22:03:29.511746 1054224 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0830 22:03:29.511756 1054224 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0830 22:03:29.511765 1054224 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0830 22:03:29.511770 1054224 command_runner.go:130] > # metrics_collectors = [
	I0830 22:03:29.512361 1054224 command_runner.go:130] > # 	"operations",
	I0830 22:03:29.512786 1054224 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0830 22:03:29.513227 1054224 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0830 22:03:29.513652 1054224 command_runner.go:130] > # 	"operations_errors",
	I0830 22:03:29.514068 1054224 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0830 22:03:29.514487 1054224 command_runner.go:130] > # 	"image_pulls_by_name",
	I0830 22:03:29.514894 1054224 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0830 22:03:29.515297 1054224 command_runner.go:130] > # 	"image_pulls_failures",
	I0830 22:03:29.515770 1054224 command_runner.go:130] > # 	"image_pulls_successes",
	I0830 22:03:29.516158 1054224 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0830 22:03:29.516580 1054224 command_runner.go:130] > # 	"image_layer_reuse",
	I0830 22:03:29.517000 1054224 command_runner.go:130] > # 	"containers_oom_total",
	I0830 22:03:29.517435 1054224 command_runner.go:130] > # 	"containers_oom",
	I0830 22:03:29.517869 1054224 command_runner.go:130] > # 	"processes_defunct",
	I0830 22:03:29.518319 1054224 command_runner.go:130] > # 	"operations_total",
	I0830 22:03:29.518336 1054224 command_runner.go:130] > # 	"operations_latency_seconds",
	I0830 22:03:29.518565 1054224 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0830 22:03:29.518818 1054224 command_runner.go:130] > # 	"operations_errors_total",
	I0830 22:03:29.518833 1054224 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0830 22:03:29.519061 1054224 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0830 22:03:29.519088 1054224 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0830 22:03:29.519324 1054224 command_runner.go:130] > # 	"image_pulls_success_total",
	I0830 22:03:29.519338 1054224 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0830 22:03:29.519555 1054224 command_runner.go:130] > # 	"containers_oom_count_total",
	I0830 22:03:29.519580 1054224 command_runner.go:130] > # ]
	I0830 22:03:29.519587 1054224 command_runner.go:130] > # The port on which the metrics server will listen.
	I0830 22:03:29.519986 1054224 command_runner.go:130] > # metrics_port = 9090
	I0830 22:03:29.520002 1054224 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0830 22:03:29.520007 1054224 command_runner.go:130] > # metrics_socket = ""
	I0830 22:03:29.520014 1054224 command_runner.go:130] > # The certificate for the secure metrics server.
	I0830 22:03:29.520025 1054224 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0830 22:03:29.520033 1054224 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0830 22:03:29.520041 1054224 command_runner.go:130] > # certificate on any modification event.
	I0830 22:03:29.520046 1054224 command_runner.go:130] > # metrics_cert = ""
	I0830 22:03:29.520054 1054224 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0830 22:03:29.520063 1054224 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0830 22:03:29.520068 1054224 command_runner.go:130] > # metrics_key = ""
	I0830 22:03:29.520075 1054224 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0830 22:03:29.520082 1054224 command_runner.go:130] > [crio.tracing]
	I0830 22:03:29.520089 1054224 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0830 22:03:29.520097 1054224 command_runner.go:130] > # enable_tracing = false
	I0830 22:03:29.520104 1054224 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0830 22:03:29.520109 1054224 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0830 22:03:29.520115 1054224 command_runner.go:130] > # Number of samples to collect per million spans.
	I0830 22:03:29.520124 1054224 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0830 22:03:29.520133 1054224 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0830 22:03:29.520144 1054224 command_runner.go:130] > [crio.stats]
	I0830 22:03:29.520155 1054224 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0830 22:03:29.520161 1054224 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0830 22:03:29.520169 1054224 command_runner.go:130] > # stats_collection_period = 0
	I0830 22:03:29.522090 1054224 command_runner.go:130] ! time="2023-08-30 22:03:29.495218841Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0830 22:03:29.522113 1054224 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
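
	Every line prefixed with '#' in the dump above is a commented-out default; the few uncommented keys (conmon_cgroup, cgroup_manager, pause_image, and the [crio.runtime.runtimes.runc] table) are the values minikube actually overrides. An illustrative Go sketch, assuming the same `crio config` command is runnable on the node, that filters the dump down to those live settings:

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "crio", "config").Output()
		if err != nil {
			log.Fatal(err)
		}
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			line := bytes.TrimSpace(sc.Bytes())
			// Skip blanks, comments (defaults) and table headers; what is
			// left are the explicitly set values.
			if len(line) == 0 || line[0] == '#' || line[0] == '[' {
				continue
			}
			fmt.Printf("%s\n", line)
		}
	}
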
	I0830 22:03:29.522180 1054224 cni.go:84] Creating CNI manager for ""
	I0830 22:03:29.522188 1054224 cni.go:136] 1 nodes found, recommending kindnet
	I0830 22:03:29.522219 1054224 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:03:29.522240 1054224 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-994875 NodeName:multinode-994875 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:03:29.522380 1054224 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-994875"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
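The kubeadm config printed above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by `---`). As a minimal sketch, not minikube's own code, the documents can be enumerated with a YAML decoder to sanity-check the file before it is fed to `kubeadm init --config`; gopkg.in/yaml.v3 and the local file name are assumptions here:

// Illustrative sketch: list apiVersion/kind for each document in the
// multi-document kubeadm YAML shown above. Assumes a local copy named
// kubeadm.yaml and the third-party gopkg.in/yaml.v3 package.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // yaml.v3 decoders iterate over `---` documents
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}

Against the config above this would print four lines, e.g. "kubeadm.k8s.io/v1beta3 / InitConfiguration" through "kubeproxy.config.k8s.io/v1alpha1 / KubeProxyConfiguration".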
	I0830 22:03:29.522448 1054224 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-994875 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-994875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:03:29.522515 1054224 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:03:29.532471 1054224 command_runner.go:130] > kubeadm
	I0830 22:03:29.532656 1054224 command_runner.go:130] > kubectl
	I0830 22:03:29.532678 1054224 command_runner.go:130] > kubelet
	I0830 22:03:29.533642 1054224 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:03:29.533722 1054224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:03:29.544853 1054224 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0830 22:03:29.568076 1054224 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:03:29.590012 1054224 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0830 22:03:29.612251 1054224 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0830 22:03:29.617047 1054224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
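The bash one-liner above is an idempotent /etc/hosts update: strip any existing line for the managed host, append the fresh entry, write to a temp file, then copy it back so a failed rewrite never truncates /etc/hosts. A minimal Go sketch of the same pattern (paths and the final privileged copy are illustrative, not minikube's implementation):

// Sketch: drop any stale line for the host, append the new entry, and
// stage the result in a temp file, mirroring the grep -v / echo / cp flow.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal" // entry managed by the tool
	const entry = "192.168.58.2\t" + host

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // mirrors grep -v $'\t...$'
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	tmp := "/tmp/hosts.new" // stand-in for the /tmp/h.$$ temp file
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
	// A real implementation would now copy tmp over /etc/hosts with
	// elevated privileges, e.g. `sudo cp /tmp/hosts.new /etc/hosts`.
	log.Printf("staged %s", tmp)
}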
	I0830 22:03:29.631135 1054224 certs.go:56] Setting up /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875 for IP: 192.168.58.2
	I0830 22:03:29.631165 1054224 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1c893f087ee62e9f919bfa6a6de84891ee8b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:03:29.631330 1054224 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.key
	I0830 22:03:29.631374 1054224 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.key
	I0830 22:03:29.631424 1054224 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.key
	I0830 22:03:29.631437 1054224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.crt with IP's: []
	I0830 22:03:29.870788 1054224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.crt ...
	I0830 22:03:29.870822 1054224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.crt: {Name:mk114dd93c0c18f0a5cb4dca5f6e9110ed6f6b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:03:29.871016 1054224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.key ...
	I0830 22:03:29.871029 1054224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.key: {Name:mk938ac5d539692749b1187e1c6d0503ce4c8d15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:03:29.871121 1054224 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/apiserver.key.cee25041
	I0830 22:03:29.871136 1054224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0830 22:03:30.086997 1054224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/apiserver.crt.cee25041 ...
	I0830 22:03:30.087035 1054224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/apiserver.crt.cee25041: {Name:mk2d1d7d0e1a1b447d3f5c9b7a4fec44c5e57b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:03:30.087261 1054224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/apiserver.key.cee25041 ...
	I0830 22:03:30.087276 1054224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/apiserver.key.cee25041: {Name:mka273324f53850da2f6ba43e9c238c666698136 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:03:30.087369 1054224 certs.go:337] copying /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/apiserver.crt
	I0830 22:03:30.087458 1054224 certs.go:341] copying /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/apiserver.key
	I0830 22:03:30.087552 1054224 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/proxy-client.key
	I0830 22:03:30.087568 1054224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/proxy-client.crt with IP's: []
	I0830 22:03:30.679149 1054224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/proxy-client.crt ...
	I0830 22:03:30.679183 1054224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/proxy-client.crt: {Name:mk66bff3082a9645d007ebc2f565d5fada759de9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:03:30.679378 1054224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/proxy-client.key ...
	I0830 22:03:30.679391 1054224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/proxy-client.key: {Name:mkac7c5a88f22d097e8aaf24f328b48029e81448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:03:30.679477 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0830 22:03:30.679497 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0830 22:03:30.679510 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0830 22:03:30.679525 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0830 22:03:30.679536 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0830 22:03:30.679553 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0830 22:03:30.679567 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0830 22:03:30.679581 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0830 22:03:30.679649 1054224 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/989825.pem (1338 bytes)
	W0830 22:03:30.679696 1054224 certs.go:433] ignoring /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/989825_empty.pem, impossibly tiny 0 bytes
	I0830 22:03:30.679711 1054224 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem (1675 bytes)
	I0830 22:03:30.679736 1054224 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem (1082 bytes)
	I0830 22:03:30.679765 1054224 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:03:30.679798 1054224 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem (1679 bytes)
	I0830 22:03:30.679847 1054224 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem (1708 bytes)
	I0830 22:03:30.679874 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/989825.pem -> /usr/share/ca-certificates/989825.pem
	I0830 22:03:30.679889 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem -> /usr/share/ca-certificates/9898252.pem
	I0830 22:03:30.679900 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:03:30.680464 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:03:30.709833 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:03:30.739087 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:03:30.769042 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0830 22:03:30.797764 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:03:30.826252 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:03:30.855549 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:03:30.884688 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0830 22:03:30.914486 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/certs/989825.pem --> /usr/share/ca-certificates/989825.pem (1338 bytes)
	I0830 22:03:30.945282 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem --> /usr/share/ca-certificates/9898252.pem (1708 bytes)
	I0830 22:03:30.975913 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:03:31.007069 1054224 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:03:31.032343 1054224 ssh_runner.go:195] Run: openssl version
	I0830 22:03:31.041376 1054224 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0830 22:03:31.041770 1054224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:03:31.055617 1054224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:03:31.061183 1054224 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 30 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:03:31.061224 1054224 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:03:31.061278 1054224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:03:31.070252 1054224 command_runner.go:130] > b5213941
	I0830 22:03:31.070677 1054224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0830 22:03:31.083132 1054224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989825.pem && ln -fs /usr/share/ca-certificates/989825.pem /etc/ssl/certs/989825.pem"
	I0830 22:03:31.095230 1054224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989825.pem
	I0830 22:03:31.100063 1054224 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 30 21:45 /usr/share/ca-certificates/989825.pem
	I0830 22:03:31.100100 1054224 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:45 /usr/share/ca-certificates/989825.pem
	I0830 22:03:31.100153 1054224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989825.pem
	I0830 22:03:31.109099 1054224 command_runner.go:130] > 51391683
	I0830 22:03:31.109698 1054224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/989825.pem /etc/ssl/certs/51391683.0"
	I0830 22:03:31.122051 1054224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9898252.pem && ln -fs /usr/share/ca-certificates/9898252.pem /etc/ssl/certs/9898252.pem"
	I0830 22:03:31.135168 1054224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9898252.pem
	I0830 22:03:31.140270 1054224 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 30 21:45 /usr/share/ca-certificates/9898252.pem
	I0830 22:03:31.140294 1054224 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:45 /usr/share/ca-certificates/9898252.pem
	I0830 22:03:31.140364 1054224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9898252.pem
	I0830 22:03:31.148832 1054224 command_runner.go:130] > 3ec20f2e
	I0830 22:03:31.149229 1054224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9898252.pem /etc/ssl/certs/3ec20f2e.0"
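The three openssl/ln rounds above install CA certificates the way OpenSSL's hashed-directory lookup expects: `openssl x509 -hash -noout` prints the subject hash (b5213941, 51391683, 3ec20f2e above), and a `<hash>.0` symlink in /etc/ssl/certs points at the PEM. A hedged sketch of one such step, shelling out to openssl just as the log does (requires root; paths illustrative):

// Sketch: compute a certificate's OpenSSL subject hash and create the
// /etc/ssl/certs/<hash>.0 symlink so hashed-directory CA lookup finds it.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}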
	I0830 22:03:31.161584 1054224 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:03:31.166107 1054224 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 22:03:31.166141 1054224 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 22:03:31.166211 1054224 kubeadm.go:404] StartCluster: {Name:multinode-994875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-994875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:03:31.166312 1054224 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0830 22:03:31.166370 1054224 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0830 22:03:31.210409 1054224 cri.go:89] found id: ""
	I0830 22:03:31.210479 1054224 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:03:31.221155 1054224 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0830 22:03:31.221189 1054224 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0830 22:03:31.221198 1054224 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0830 22:03:31.221307 1054224 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:03:31.232317 1054224 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0830 22:03:31.232427 1054224 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:03:31.241972 1054224 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0830 22:03:31.242044 1054224 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0830 22:03:31.243214 1054224 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0830 22:03:31.243258 1054224 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:03:31.243312 1054224 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:03:31.243366 1054224 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0830 22:03:31.303488 1054224 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0830 22:03:31.303561 1054224 command_runner.go:130] > [init] Using Kubernetes version: v1.28.1
	I0830 22:03:31.303986 1054224 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 22:03:31.304026 1054224 command_runner.go:130] > [preflight] Running pre-flight checks
	I0830 22:03:31.353228 1054224 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0830 22:03:31.353257 1054224 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0830 22:03:31.353310 1054224 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1043-aws
	I0830 22:03:31.353321 1054224 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1043-aws
	I0830 22:03:31.353352 1054224 kubeadm.go:322] OS: Linux
	I0830 22:03:31.353364 1054224 command_runner.go:130] > OS: Linux
	I0830 22:03:31.353406 1054224 kubeadm.go:322] CGROUPS_CPU: enabled
	I0830 22:03:31.353416 1054224 command_runner.go:130] > CGROUPS_CPU: enabled
	I0830 22:03:31.353460 1054224 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0830 22:03:31.353468 1054224 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0830 22:03:31.353512 1054224 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0830 22:03:31.353521 1054224 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0830 22:03:31.353565 1054224 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0830 22:03:31.353573 1054224 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0830 22:03:31.353618 1054224 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0830 22:03:31.353626 1054224 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0830 22:03:31.353670 1054224 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0830 22:03:31.353679 1054224 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0830 22:03:31.353721 1054224 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0830 22:03:31.353730 1054224 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0830 22:03:31.353774 1054224 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0830 22:03:31.353783 1054224 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0830 22:03:31.353825 1054224 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0830 22:03:31.353835 1054224 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0830 22:03:31.445743 1054224 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:03:31.445773 1054224 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:03:31.445881 1054224 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:03:31.445893 1054224 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:03:31.445990 1054224 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 22:03:31.446008 1054224 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 22:03:31.714154 1054224 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 22:03:31.719160 1054224 out.go:204]   - Generating certificates and keys ...
	I0830 22:03:31.714496 1054224 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 22:03:31.719335 1054224 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 22:03:31.719362 1054224 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0830 22:03:31.719444 1054224 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0830 22:03:31.719452 1054224 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 22:03:32.229096 1054224 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0830 22:03:32.229150 1054224 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0830 22:03:32.488579 1054224 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0830 22:03:32.488609 1054224 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0830 22:03:33.228931 1054224 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0830 22:03:33.228957 1054224 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0830 22:03:33.796791 1054224 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0830 22:03:33.796815 1054224 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0830 22:03:34.389591 1054224 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0830 22:03:34.389629 1054224 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0830 22:03:34.390013 1054224 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-994875] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0830 22:03:34.390027 1054224 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-994875] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0830 22:03:34.756954 1054224 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0830 22:03:34.756978 1054224 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0830 22:03:34.757371 1054224 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-994875] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0830 22:03:34.757384 1054224 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-994875] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0830 22:03:35.367005 1054224 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0830 22:03:35.367030 1054224 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0830 22:03:35.786530 1054224 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0830 22:03:35.786553 1054224 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0830 22:03:36.294980 1054224 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0830 22:03:36.295009 1054224 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0830 22:03:36.295398 1054224 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:03:36.295412 1054224 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:03:37.203814 1054224 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:03:37.203843 1054224 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:03:37.618410 1054224 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:03:37.618436 1054224 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:03:37.996353 1054224 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:03:37.996378 1054224 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:03:38.361057 1054224 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:03:38.361086 1054224 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:03:38.361640 1054224 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:03:38.361656 1054224 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:03:38.364391 1054224 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:03:38.366636 1054224 out.go:204]   - Booting up control plane ...
	I0830 22:03:38.364481 1054224 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:03:38.366725 1054224 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:03:38.366734 1054224 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:03:38.366838 1054224 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:03:38.366845 1054224 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:03:38.367492 1054224 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:03:38.367506 1054224 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:03:38.380622 1054224 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:03:38.380644 1054224 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:03:38.381881 1054224 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:03:38.381904 1054224 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:03:38.381941 1054224 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 22:03:38.381952 1054224 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0830 22:03:38.491490 1054224 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:03:38.491522 1054224 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:03:45.996467 1054224 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503681 seconds
	I0830 22:03:45.996492 1054224 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.503681 seconds
	I0830 22:03:45.996592 1054224 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:03:45.996598 1054224 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:03:46.032415 1054224 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 22:03:46.032441 1054224 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 22:03:46.569820 1054224 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 22:03:46.569845 1054224 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0830 22:03:46.570087 1054224 kubeadm.go:322] [mark-control-plane] Marking the node multinode-994875 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 22:03:46.570106 1054224 command_runner.go:130] > [mark-control-plane] Marking the node multinode-994875 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 22:03:47.081303 1054224 kubeadm.go:322] [bootstrap-token] Using token: 69kz6y.abgh5bsfu4zc4oiq
	I0830 22:03:47.081328 1054224 command_runner.go:130] > [bootstrap-token] Using token: 69kz6y.abgh5bsfu4zc4oiq
	I0830 22:03:47.083168 1054224 out.go:204]   - Configuring RBAC rules ...
	I0830 22:03:47.083291 1054224 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 22:03:47.083308 1054224 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 22:03:47.089048 1054224 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 22:03:47.089080 1054224 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 22:03:47.096971 1054224 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 22:03:47.096999 1054224 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 22:03:47.101011 1054224 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 22:03:47.101033 1054224 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 22:03:47.108163 1054224 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 22:03:47.108185 1054224 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 22:03:47.112984 1054224 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 22:03:47.113005 1054224 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 22:03:47.126536 1054224 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 22:03:47.126561 1054224 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 22:03:47.377590 1054224 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 22:03:47.377614 1054224 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0830 22:03:47.519323 1054224 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 22:03:47.519345 1054224 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0830 22:03:47.519350 1054224 kubeadm.go:322] 
	I0830 22:03:47.519406 1054224 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 22:03:47.519411 1054224 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0830 22:03:47.519415 1054224 kubeadm.go:322] 
	I0830 22:03:47.519487 1054224 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 22:03:47.519492 1054224 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0830 22:03:47.519496 1054224 kubeadm.go:322] 
	I0830 22:03:47.519521 1054224 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 22:03:47.519525 1054224 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0830 22:03:47.519580 1054224 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 22:03:47.519584 1054224 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 22:03:47.519631 1054224 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 22:03:47.519641 1054224 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 22:03:47.519645 1054224 kubeadm.go:322] 
	I0830 22:03:47.519696 1054224 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0830 22:03:47.519700 1054224 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0830 22:03:47.519704 1054224 kubeadm.go:322] 
	I0830 22:03:47.519749 1054224 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 22:03:47.519753 1054224 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 22:03:47.519757 1054224 kubeadm.go:322] 
	I0830 22:03:47.519806 1054224 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 22:03:47.519810 1054224 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0830 22:03:47.519880 1054224 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 22:03:47.519885 1054224 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 22:03:47.519958 1054224 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 22:03:47.519964 1054224 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 22:03:47.519968 1054224 kubeadm.go:322] 
	I0830 22:03:47.520047 1054224 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 22:03:47.520051 1054224 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0830 22:03:47.520123 1054224 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 22:03:47.520128 1054224 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0830 22:03:47.520132 1054224 kubeadm.go:322] 
	I0830 22:03:47.520211 1054224 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 69kz6y.abgh5bsfu4zc4oiq \
	I0830 22:03:47.520215 1054224 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 69kz6y.abgh5bsfu4zc4oiq \
	I0830 22:03:47.520311 1054224 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dbb2d1601005e0eb74ea76f1ea00d2a8cf049d471533cfdd7a067e3844af0231 \
	I0830 22:03:47.520316 1054224 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbb2d1601005e0eb74ea76f1ea00d2a8cf049d471533cfdd7a067e3844af0231 \
	I0830 22:03:47.520335 1054224 kubeadm.go:322] 	--control-plane 
	I0830 22:03:47.520339 1054224 command_runner.go:130] > 	--control-plane 
	I0830 22:03:47.520343 1054224 kubeadm.go:322] 
	I0830 22:03:47.520423 1054224 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 22:03:47.520431 1054224 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0830 22:03:47.520435 1054224 kubeadm.go:322] 
	I0830 22:03:47.520512 1054224 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 69kz6y.abgh5bsfu4zc4oiq \
	I0830 22:03:47.520516 1054224 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 69kz6y.abgh5bsfu4zc4oiq \
	I0830 22:03:47.520611 1054224 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dbb2d1601005e0eb74ea76f1ea00d2a8cf049d471533cfdd7a067e3844af0231 
	I0830 22:03:47.520615 1054224 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dbb2d1601005e0eb74ea76f1ea00d2a8cf049d471533cfdd7a067e3844af0231 
	I0830 22:03:47.522894 1054224 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1043-aws\n", err: exit status 1
	I0830 22:03:47.522917 1054224 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1043-aws\n", err: exit status 1
	I0830 22:03:47.523016 1054224 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 22:03:47.523022 1054224 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 22:03:47.523037 1054224 cni.go:84] Creating CNI manager for ""
	I0830 22:03:47.523052 1054224 cni.go:136] 1 nodes found, recommending kindnet
	I0830 22:03:47.526126 1054224 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0830 22:03:47.527772 1054224 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0830 22:03:47.537620 1054224 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0830 22:03:47.537640 1054224 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0830 22:03:47.537648 1054224 command_runner.go:130] > Device: 3ah/58d	Inode: 1305245     Links: 1
	I0830 22:03:47.537656 1054224 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 22:03:47.537667 1054224 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0830 22:03:47.537673 1054224 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0830 22:03:47.537682 1054224 command_runner.go:130] > Change: 2023-08-30 21:37:54.859837350 +0000
	I0830 22:03:47.537688 1054224 command_runner.go:130] >  Birth: 2023-08-30 21:37:54.819837423 +0000
	I0830 22:03:47.538216 1054224 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0830 22:03:47.538232 1054224 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0830 22:03:47.573894 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0830 22:03:48.445583 1054224 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0830 22:03:48.452367 1054224 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0830 22:03:48.466483 1054224 command_runner.go:130] > serviceaccount/kindnet created
	I0830 22:03:48.479045 1054224 command_runner.go:130] > daemonset.apps/kindnet created
	I0830 22:03:48.485566 1054224 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:03:48.485687 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:48.485772 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=d7e60a4db8510b81002db541520f138fed781588 minikube.k8s.io/name=multinode-994875 minikube.k8s.io/updated_at=2023_08_30T22_03_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:48.693351 1054224 command_runner.go:130] > node/multinode-994875 labeled
	I0830 22:03:48.697204 1054224 command_runner.go:130] > -16
	I0830 22:03:48.697229 1054224 ops.go:34] apiserver oom_adj: -16
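The -16 read above comes from /proc/<pid>/oom_adj for the kube-apiserver process, confirming the apiserver is protected from the kernel OOM killer. A minimal sketch of that check (illustrative only; modern kernels also expose the newer /proc/<pid>/oom_score_adj knob):

// Sketch: find the kube-apiserver PID with pgrep and read its oom_adj,
// mirroring the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		log.Fatal(err) // pgrep exits non-zero if no process matches
	}
	pid := strings.Fields(string(out))[0] // first match, as $(pgrep ...) yields
	raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(raw))) // e.g. -16
}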
	I0830 22:03:48.697268 1054224 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0830 22:03:48.697337 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:48.852124 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:48.852208 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:48.949029 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:49.449740 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:49.550609 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:49.949200 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:50.044468 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:50.449981 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:50.545496 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:50.949696 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:51.046640 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:51.449293 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:51.540808 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:51.949264 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:52.038968 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:52.449295 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:52.539894 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:52.949285 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:53.041259 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:53.449848 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:53.546541 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:53.949959 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:54.044562 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:54.449782 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:54.542392 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:54.949973 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:55.058478 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:55.450071 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:55.546689 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:55.949273 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:56.039537 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:56.449395 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:56.543860 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:56.949972 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:57.046274 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:57.449808 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:57.548866 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:57.949269 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:58.053186 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:58.449715 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:58.548432 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:58.950074 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:59.046586 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:59.449195 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:03:59.562655 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:03:59.949244 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:04:00.379992 1054224 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0830 22:04:00.449417 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:04:00.614349 1054224 command_runner.go:130] > NAME      SECRETS   AGE
	I0830 22:04:00.614366 1054224 command_runner.go:130] > default   0         0s
	I0830 22:04:00.618020 1054224 kubeadm.go:1081] duration metric: took 12.132415533s to wait for elevateKubeSystemPrivileges.
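The NotFound/retry run above is a poll loop: kube-controller-manager creates the "default" ServiceAccount asynchronously after init, so the tool re-runs the lookup roughly every 500ms until it appears (about 12s here). A minimal sketch of such a loop, shelling out to kubectl; the interval matches the cadence in the log, while the timeout is chosen for illustration:

// Sketch: poll `kubectl get sa default` until the service account exists
// or a deadline passes.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("kubectl", "get", "sa", "default",
			"--namespace", "default").Run()
		if err == nil {
			log.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // ~0.5s cadence, as in the log
	}
	log.Fatal("timed out waiting for the default service account")
}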
	I0830 22:04:00.618053 1054224 kubeadm.go:406] StartCluster complete in 29.451848937s
	I0830 22:04:00.618082 1054224 settings.go:142] acquiring lock: {Name:mkc3addaaa213f1dd8b8b58d94d3f946bbcb1099 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:04:00.618187 1054224 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 22:04:00.618908 1054224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/kubeconfig: {Name:mk735c90eaee551cc7c6cf5c5ad3cfbf98dfe457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:04:00.619456 1054224 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 22:04:00.619704 1054224 kapi.go:59] client config for multinode-994875: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.key", CAFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1723840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 22:04:00.619939 1054224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:04:00.620228 1054224 config.go:182] Loaded profile config "multinode-994875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:04:00.620421 1054224 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 22:04:00.620610 1054224 addons.go:69] Setting storage-provisioner=true in profile "multinode-994875"
	I0830 22:04:00.620629 1054224 addons.go:231] Setting addon storage-provisioner=true in "multinode-994875"
	I0830 22:04:00.620695 1054224 host.go:66] Checking if "multinode-994875" exists ...
	I0830 22:04:00.621222 1054224 cli_runner.go:164] Run: docker container inspect multinode-994875 --format={{.State.Status}}
	I0830 22:04:00.621639 1054224 addons.go:69] Setting default-storageclass=true in profile "multinode-994875"
	I0830 22:04:00.621659 1054224 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-994875"
	I0830 22:04:00.621920 1054224 cli_runner.go:164] Run: docker container inspect multinode-994875 --format={{.State.Status}}
	I0830 22:04:00.623174 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0830 22:04:00.623221 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:00.623241 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:00.623278 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:00.623554 1054224 cert_rotation.go:137] Starting client certificate rotation controller
	I0830 22:04:00.680123 1054224 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:04:00.675480 1054224 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 22:04:00.680055 1054224 round_trippers.go:574] Response Status: 200 OK in 56 milliseconds
	I0830 22:04:00.682137 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:00.682152 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:00 GMT
	I0830 22:04:00.682161 1054224 round_trippers.go:580]     Audit-Id: 5a6c2600-f07d-4ac2-b6f9-accbca8dc91c
	I0830 22:04:00.682168 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:00.682175 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:00.682183 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:00.682189 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:00.682197 1054224 round_trippers.go:580]     Content-Length: 291
	I0830 22:04:00.682226 1054224 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8bcc123f-4915-4961-a683-1857b6b65ea4","resourceVersion":"374","creationTimestamp":"2023-08-30T22:03:47Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0830 22:04:00.682652 1054224 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8bcc123f-4915-4961-a683-1857b6b65ea4","resourceVersion":"374","creationTimestamp":"2023-08-30T22:03:47Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0830 22:04:00.682706 1054224 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0830 22:04:00.682710 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:00.682718 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:00.682725 1054224 round_trippers.go:473]     Content-Type: application/json
	I0830 22:04:00.682733 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:00.682931 1054224 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:04:00.682942 1054224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:04:00.683033 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875
	I0830 22:04:00.683441 1054224 kapi.go:59] client config for multinode-994875: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.key", CAFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1723840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 22:04:00.683765 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0830 22:04:00.683775 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:00.683783 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:00.683792 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:00.709707 1054224 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0830 22:04:00.709729 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:00.709737 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:00.709744 1054224 round_trippers.go:580]     Content-Length: 109
	I0830 22:04:00.709751 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:00 GMT
	I0830 22:04:00.709757 1054224 round_trippers.go:580]     Audit-Id: 30ea59d0-a07e-47b6-a75c-06449916fc39
	I0830 22:04:00.709764 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:00.709771 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:00.709777 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:00.710184 1054224 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"389"},"items":[]}
	I0830 22:04:00.710463 1054224 addons.go:231] Setting addon default-storageclass=true in "multinode-994875"
	I0830 22:04:00.710494 1054224 host.go:66] Checking if "multinode-994875" exists ...
	I0830 22:04:00.710930 1054224 cli_runner.go:164] Run: docker container inspect multinode-994875 --format={{.State.Status}}
	I0830 22:04:00.729613 1054224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34088 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875/id_rsa Username:docker}
	I0830 22:04:00.736328 1054224 round_trippers.go:574] Response Status: 200 OK in 53 milliseconds
	I0830 22:04:00.736350 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:00.736359 1054224 round_trippers.go:580]     Audit-Id: ee37e942-5df2-493f-9653-49cfe1c56762
	I0830 22:04:00.736366 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:00.736373 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:00.736379 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:00.736386 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:00.736393 1054224 round_trippers.go:580]     Content-Length: 291
	I0830 22:04:00.736399 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:00 GMT
	I0830 22:04:00.736423 1054224 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8bcc123f-4915-4961-a683-1857b6b65ea4","resourceVersion":"390","creationTimestamp":"2023-08-30T22:03:47Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0830 22:04:00.736556 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0830 22:04:00.736563 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:00.736570 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:00.736577 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:00.745956 1054224 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:04:00.745976 1054224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:04:00.746034 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875
	I0830 22:04:00.774235 1054224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34088 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875/id_rsa Username:docker}
	I0830 22:04:00.783229 1054224 round_trippers.go:574] Response Status: 200 OK in 46 milliseconds
	I0830 22:04:00.783249 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:00.783258 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:00 GMT
	I0830 22:04:00.783266 1054224 round_trippers.go:580]     Audit-Id: 54057d10-b95d-4bc1-a5dc-c933f878c0bd
	I0830 22:04:00.783272 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:00.783279 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:00.783286 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:00.783292 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:00.783299 1054224 round_trippers.go:580]     Content-Length: 291
	I0830 22:04:00.800999 1054224 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8bcc123f-4915-4961-a683-1857b6b65ea4","resourceVersion":"390","creationTimestamp":"2023-08-30T22:03:47Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0830 22:04:00.801120 1054224 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-994875" context rescaled to 1 replicas
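
The rescale logged above is a plain read-modify-write against the Deployment's scale subresource: the GET returns the Scale object at spec.replicas:2, the client rewrites it to 1, and the PUT writes it back (resourceVersion 374 -> 390 in the bodies above). A minimal client-go sketch of the same round trip, using the kubeconfig path and names from this log for illustration; this is not minikube's kapi.go:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a clientset from the kubeconfig referenced in the log above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17145-984449/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		deployments := cs.AppsV1().Deployments("kube-system")

		// GET .../deployments/coredns/scale
		scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Rewrite spec.replicas and PUT the Scale object back.
		scale.Spec.Replicas = 1
		if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns rescaled to 1 replica")
	}
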
	I0830 22:04:00.801198 1054224 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0830 22:04:00.803650 1054224 out.go:177] * Verifying Kubernetes components...
	I0830 22:04:00.805607 1054224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:04:00.898047 1054224 command_runner.go:130] > apiVersion: v1
	I0830 22:04:00.898069 1054224 command_runner.go:130] > data:
	I0830 22:04:00.898075 1054224 command_runner.go:130] >   Corefile: |
	I0830 22:04:00.898079 1054224 command_runner.go:130] >     .:53 {
	I0830 22:04:00.898084 1054224 command_runner.go:130] >         errors
	I0830 22:04:00.898097 1054224 command_runner.go:130] >         health {
	I0830 22:04:00.898106 1054224 command_runner.go:130] >            lameduck 5s
	I0830 22:04:00.898111 1054224 command_runner.go:130] >         }
	I0830 22:04:00.898122 1054224 command_runner.go:130] >         ready
	I0830 22:04:00.898129 1054224 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0830 22:04:00.898140 1054224 command_runner.go:130] >            pods insecure
	I0830 22:04:00.898147 1054224 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0830 22:04:00.898156 1054224 command_runner.go:130] >            ttl 30
	I0830 22:04:00.898162 1054224 command_runner.go:130] >         }
	I0830 22:04:00.898177 1054224 command_runner.go:130] >         prometheus :9153
	I0830 22:04:00.898182 1054224 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0830 22:04:00.898188 1054224 command_runner.go:130] >            max_concurrent 1000
	I0830 22:04:00.898194 1054224 command_runner.go:130] >         }
	I0830 22:04:00.898199 1054224 command_runner.go:130] >         cache 30
	I0830 22:04:00.898208 1054224 command_runner.go:130] >         loop
	I0830 22:04:00.898213 1054224 command_runner.go:130] >         reload
	I0830 22:04:00.898218 1054224 command_runner.go:130] >         loadbalance
	I0830 22:04:00.898227 1054224 command_runner.go:130] >     }
	I0830 22:04:00.898232 1054224 command_runner.go:130] > kind: ConfigMap
	I0830 22:04:00.898236 1054224 command_runner.go:130] > metadata:
	I0830 22:04:00.898253 1054224 command_runner.go:130] >   creationTimestamp: "2023-08-30T22:03:47Z"
	I0830 22:04:00.898258 1054224 command_runner.go:130] >   name: coredns
	I0830 22:04:00.898264 1054224 command_runner.go:130] >   namespace: kube-system
	I0830 22:04:00.898272 1054224 command_runner.go:130] >   resourceVersion: "269"
	I0830 22:04:00.898277 1054224 command_runner.go:130] >   uid: b177992b-d818-4ed5-9ed0-6bb64d607813
	I0830 22:04:00.902393 1054224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
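
This sed pipeline is where the "host record injected" message further down comes from: it rewrites the Corefile dumped just above, inserting a log directive ahead of errors and a hosts stanza ahead of the forward plugin, then pushes the result back with kubectl replace. Reconstructed from the sed program itself (an illustration, not captured output), the patched Corefile should come out as:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.58.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf {
	       max_concurrent 1000
	    }
	    ...
	}

Because the hosts block ends with fallthrough, only host.minikube.internal is answered from the static table; every other name still falls through to forward and /etc/resolv.conf.
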
	I0830 22:04:00.902909 1054224 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 22:04:00.903211 1054224 kapi.go:59] client config for multinode-994875: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.key", CAFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1723840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 22:04:00.903531 1054224 node_ready.go:35] waiting up to 6m0s for node "multinode-994875" to be "Ready" ...
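
From here node_ready.go polls the node object at a roughly 500ms cadence (visible in the request timestamps below) until status.conditions reports Ready=True or the 6m0s budget expires; each GET /api/v1/nodes/multinode-994875 round trip that follows is one iteration. A rough sketch of such a loop with client-go (waitNodeReady is a hypothetical helper reusing the clientset from the previous sketch; minikube's real node_ready.go differs in detail):

	// Sketch only: assumes the clientset construction shown earlier.
	// Extra imports: "time", corev1 "k8s.io/api/core/v1".
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					// The log below keeps reporting "Ready":"False" until the
					// kubelet and CNI come up; this is the condition checked.
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
	}

Called as waitNodeReady(ctx, cs, "multinode-994875", 6*time.Minute), this matches the wait announced at 22:04:00.903531.
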
	I0830 22:04:00.903615 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:00.903625 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:00.903634 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:00.903647 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:00.914154 1054224 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0830 22:04:00.914181 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:00.914191 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:00 GMT
	I0830 22:04:00.914198 1054224 round_trippers.go:580]     Audit-Id: 86a6169d-682b-4765-8178-0b1978d74438
	I0830 22:04:00.914211 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:00.914218 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:00.914227 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:00.914234 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:00.915594 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:00.916392 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:00.916416 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:00.916426 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:00.916433 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:00.938946 1054224 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0830 22:04:00.938975 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:00.938984 1054224 round_trippers.go:580]     Audit-Id: aacab020-7025-40b6-ad47-a57631156314
	I0830 22:04:00.938991 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:00.939013 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:00.939025 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:00.939032 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:00.939049 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:00 GMT
	I0830 22:04:00.941721 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:01.013007 1054224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:04:01.016339 1054224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:04:01.442507 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:01.442529 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:01.442554 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:01.442561 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:01.482185 1054224 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0830 22:04:01.482240 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:01.482250 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:01.482258 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:01.482265 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:01 GMT
	I0830 22:04:01.482274 1054224 round_trippers.go:580]     Audit-Id: a990d8a1-006f-4e10-b162-a875c5c32f1d
	I0830 22:04:01.482286 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:01.482300 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:01.482836 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:01.527384 1054224 command_runner.go:130] > configmap/coredns replaced
	I0830 22:04:01.533729 1054224 start.go:901] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0830 22:04:01.541354 1054224 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0830 22:04:01.799357 1054224 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0830 22:04:01.807954 1054224 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0830 22:04:01.818471 1054224 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0830 22:04:01.828752 1054224 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0830 22:04:01.839217 1054224 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0830 22:04:01.851823 1054224 command_runner.go:130] > pod/storage-provisioner created
	I0830 22:04:01.861087 1054224 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0830 22:04:01.862655 1054224 addons.go:502] enable addons completed in 1.242211666s: enabled=[default-storageclass storage-provisioner]
	I0830 22:04:01.942352 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:01.942378 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:01.942388 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:01.942396 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:01.945087 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:01.945109 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:01.945118 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:01.945147 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:01.945155 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:01.945162 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:01 GMT
	I0830 22:04:01.945169 1054224 round_trippers.go:580]     Audit-Id: a1162788-7c93-4ac9-b750-afcc2b2a5b7b
	I0830 22:04:01.945175 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:01.945508 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:02.443114 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:02.443139 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:02.443149 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:02.443157 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:02.445848 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:02.445919 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:02.445940 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:02.445958 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:02.446016 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:02.446031 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:02 GMT
	I0830 22:04:02.446039 1054224 round_trippers.go:580]     Audit-Id: 6b3fb94c-88cc-47ac-bc7b-20158e7d7808
	I0830 22:04:02.446045 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:02.446269 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:02.942679 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:02.942701 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:02.942711 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:02.942719 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:02.945155 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:02.945225 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:02.945242 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:02.945249 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:02.945256 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:02.945263 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:02.945269 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:02 GMT
	I0830 22:04:02.945290 1054224 round_trippers.go:580]     Audit-Id: 35502992-21aa-4ed8-aae8-d0b2f55f7469
	I0830 22:04:02.945683 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:02.946117 1054224 node_ready.go:58] node "multinode-994875" has status "Ready":"False"
	I0830 22:04:03.442927 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:03.442953 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:03.442963 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:03.442971 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:03.446118 1054224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 22:04:03.446198 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:03.446216 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:03.446227 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:03.446234 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:03.446241 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:03 GMT
	I0830 22:04:03.446248 1054224 round_trippers.go:580]     Audit-Id: 77f845bc-b0fa-4cf9-8c74-1e57885606d1
	I0830 22:04:03.446266 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:03.446426 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:03.942381 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:03.942402 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:03.942413 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:03.942445 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:03.945021 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:03.945060 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:03.945070 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:03.945077 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:03.945087 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:03.945097 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:03.945110 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:03 GMT
	I0830 22:04:03.945119 1054224 round_trippers.go:580]     Audit-Id: 45da5481-fafb-43a2-bbbe-b64f20946d5e
	I0830 22:04:03.945342 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:04.443023 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:04.443048 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:04.443058 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:04.443066 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:04.445736 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:04.445775 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:04.445784 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:04.445791 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:04.445813 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:04.445821 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:04 GMT
	I0830 22:04:04.445828 1054224 round_trippers.go:580]     Audit-Id: af50516a-300a-4948-bdbc-e244ef9bcfb5
	I0830 22:04:04.445839 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:04.446286 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:04.942450 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:04.942480 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:04.942491 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:04.942498 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:04.945253 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:04.945280 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:04.945289 1054224 round_trippers.go:580]     Audit-Id: 65ccb4fe-6ed8-4096-ba07-8466d65cc4ac
	I0830 22:04:04.945296 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:04.945303 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:04.945309 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:04.945316 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:04.945323 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:04 GMT
	I0830 22:04:04.945488 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:05.442616 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:05.442640 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:05.442649 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:05.442657 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:05.445150 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:05.445176 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:05.445185 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:05.445192 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:05 GMT
	I0830 22:04:05.445199 1054224 round_trippers.go:580]     Audit-Id: 1e39aa4f-638f-499b-9fdc-32b7ec82660b
	I0830 22:04:05.445206 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:05.445212 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:05.445223 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:05.445446 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:05.445852 1054224 node_ready.go:58] node "multinode-994875" has status "Ready":"False"
	I0830 22:04:05.943121 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:05.943146 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:05.943156 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:05.943163 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:05.945716 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:05.945745 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:05.945754 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:05 GMT
	I0830 22:04:05.945761 1054224 round_trippers.go:580]     Audit-Id: 4d170149-828a-424e-a181-7c8f354953ae
	I0830 22:04:05.945767 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:05.945774 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:05.945781 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:05.945793 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:05.945930 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:06.443188 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:06.443219 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:06.443229 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:06.443237 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:06.445859 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:06.445881 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:06.445890 1054224 round_trippers.go:580]     Audit-Id: a9b2518f-57eb-44db-8fd5-05de27a0580f
	I0830 22:04:06.445896 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:06.445903 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:06.445910 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:06.445917 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:06.445924 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:06 GMT
	I0830 22:04:06.446017 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:06.942604 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:06.942625 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:06.942635 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:06.942644 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:06.945215 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:06.945240 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:06.945249 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:06.945256 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:06.945263 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:06 GMT
	I0830 22:04:06.945269 1054224 round_trippers.go:580]     Audit-Id: 9fe9a12a-81d6-4c4c-8d8d-234e43b79bc2
	I0830 22:04:06.945276 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:06.945282 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:06.945763 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:07.442470 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:07.442495 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:07.442505 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:07.442513 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:07.445103 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:07.445160 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:07.445170 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:07.445177 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:07.445184 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:07 GMT
	I0830 22:04:07.445191 1054224 round_trippers.go:580]     Audit-Id: 51172494-efd6-4b70-83f1-656a1f7a081a
	I0830 22:04:07.445198 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:07.445205 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:07.445727 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:07.446131 1054224 node_ready.go:58] node "multinode-994875" has status "Ready":"False"
	I0830 22:04:07.942381 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:07.942405 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:07.942416 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:07.942424 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:07.945207 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:07.945229 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:07.945238 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:07.945245 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:07 GMT
	I0830 22:04:07.945251 1054224 round_trippers.go:580]     Audit-Id: dd7154a9-60a2-416b-b02c-63f0ab03c085
	I0830 22:04:07.945258 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:07.945265 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:07.945271 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:07.945422 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:08.443044 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:08.443064 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:08.443074 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:08.443081 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:08.445864 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:08.445888 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:08.445898 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:08.445905 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:08 GMT
	I0830 22:04:08.445914 1054224 round_trippers.go:580]     Audit-Id: c70a1e89-eb6e-4069-b548-2eb539fc6429
	I0830 22:04:08.445921 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:08.445928 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:08.445934 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:08.446249 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:08.943279 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:08.943308 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:08.943321 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:08.943329 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:08.946007 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:08.946040 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:08.946051 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:08 GMT
	I0830 22:04:08.946059 1054224 round_trippers.go:580]     Audit-Id: 4ff87e7d-f50f-45e5-96d5-5fd00ee3fc87
	I0830 22:04:08.946065 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:08.946072 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:08.946078 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:08.946085 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:08.946206 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:09.442441 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:09.442466 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:09.442477 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:09.442484 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:09.445111 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:09.445157 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:09.445167 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:09.445174 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:09.445181 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:09.445188 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:09 GMT
	I0830 22:04:09.445195 1054224 round_trippers.go:580]     Audit-Id: ebcd980a-2efb-4b11-ac68-adf686e18bb5
	I0830 22:04:09.445202 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:09.445316 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:09.942385 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:09.942408 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:09.942418 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:09.942425 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:09.944946 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:09.944973 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:09.944982 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:09.944989 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:09.944998 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:09 GMT
	I0830 22:04:09.945005 1054224 round_trippers.go:580]     Audit-Id: 1ed14b18-33ea-4505-ab1c-300a40e599d2
	I0830 22:04:09.945015 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:09.945024 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:09.945164 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:09.945556 1054224 node_ready.go:58] node "multinode-994875" has status "Ready":"False"
	I0830 22:04:10.443279 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:10.443305 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:10.443318 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:10.443334 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:10.446080 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:10.446102 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:10.446110 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:10.446117 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:10.446124 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:10.446131 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:10.446137 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:10 GMT
	I0830 22:04:10.446145 1054224 round_trippers.go:580]     Audit-Id: cb4034ff-35c3-4739-9a56-1428b0bf3d6e
	I0830 22:04:10.446307 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:10.943087 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:10.943109 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:10.943119 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:10.943127 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:10.945635 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:10.945656 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:10.945665 1054224 round_trippers.go:580]     Audit-Id: 6ff33192-2036-4233-838a-005518ded68d
	I0830 22:04:10.945672 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:10.945686 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:10.945696 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:10.945705 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:10.945712 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:10 GMT
	I0830 22:04:10.946001 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:11.443173 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:11.443198 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:11.443209 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:11.443216 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:11.445751 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:11.445782 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:11.445792 1054224 round_trippers.go:580]     Audit-Id: e73753c0-913e-4feb-8801-e5a6dfa14c83
	I0830 22:04:11.445802 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:11.445809 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:11.445816 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:11.445822 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:11.445833 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:11 GMT
	I0830 22:04:11.445968 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:11.943223 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:11.943249 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:11.943260 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:11.943267 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:11.945896 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:11.945917 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:11.945928 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:11.945935 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:11.945941 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:11.945948 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:11.945955 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:11 GMT
	I0830 22:04:11.945961 1054224 round_trippers.go:580]     Audit-Id: fbfba796-afa9-4f37-afa6-693cfe424aa9
	I0830 22:04:11.946094 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:11.946492 1054224 node_ready.go:58] node "multinode-994875" has status "Ready":"False"
	I0830 22:04:12.443297 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:12.443322 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:12.443332 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:12.443340 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:12.445828 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:12.445852 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:12.445862 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:12.445869 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:12.445876 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:12.445882 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:12.445889 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:12 GMT
	I0830 22:04:12.445899 1054224 round_trippers.go:580]     Audit-Id: 25704538-6bb2-44a0-9d2f-82aa5008b78e
	I0830 22:04:12.446003 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:12.943119 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:12.943141 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:12.943151 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:12.943158 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:12.945743 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:12.945769 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:12.945778 1054224 round_trippers.go:580]     Audit-Id: c6a0e51a-30df-426e-9007-ddc8546b113e
	I0830 22:04:12.945785 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:12.945791 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:12.945798 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:12.945805 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:12.945812 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:12 GMT
	I0830 22:04:12.945945 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:13.443078 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:13.443102 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:13.443112 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:13.443120 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:13.445709 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:13.445732 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:13.445741 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:13.445748 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:13.445754 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:13.445763 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:13.445773 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:13 GMT
	I0830 22:04:13.445780 1054224 round_trippers.go:580]     Audit-Id: 662251c4-93ff-412b-93cd-93ce751e6ff4
	I0830 22:04:13.446118 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:13.942400 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:13.942430 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:13.942441 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:13.942463 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:13.945045 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:13.945067 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:13.945076 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:13.945083 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:13.945090 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:13.945097 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:13 GMT
	I0830 22:04:13.945104 1054224 round_trippers.go:580]     Audit-Id: 4fcb1827-62d4-4468-99bc-d5364d03830e
	I0830 22:04:13.945111 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:13.945446 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:14.442540 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:14.442563 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:14.442574 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:14.442581 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:14.445181 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:14.445204 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:14.445213 1054224 round_trippers.go:580]     Audit-Id: 01ecf44c-169d-42a9-a092-b2c2e2a139b6
	I0830 22:04:14.445220 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:14.445226 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:14.445233 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:14.445239 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:14.445246 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:14 GMT
	I0830 22:04:14.445360 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:14.445757 1054224 node_ready.go:58] node "multinode-994875" has status "Ready":"False"
	I0830 22:04:14.943138 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:14.943165 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:14.943175 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:14.943183 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:14.945646 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:14.945672 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:14.945682 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:14.945689 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:14 GMT
	I0830 22:04:14.945741 1054224 round_trippers.go:580]     Audit-Id: 00f5d1f6-e005-4054-9a94-6780be8f1b72
	I0830 22:04:14.945754 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:14.945763 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:14.945774 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:14.945898 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:15.442640 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:15.442672 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:15.442683 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:15.442690 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:15.445410 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:15.445432 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:15.445441 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:15.445447 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:15.445454 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:15.445461 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:15 GMT
	I0830 22:04:15.445468 1054224 round_trippers.go:580]     Audit-Id: 39728017-fc87-4710-a6ac-363290faeacf
	I0830 22:04:15.445474 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:15.445583 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:15.942601 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:15.942624 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:15.942634 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:15.942641 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:15.945156 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:15.945182 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:15.945192 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:15.945199 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:15.945207 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:15.945213 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:15 GMT
	I0830 22:04:15.945226 1054224 round_trippers.go:580]     Audit-Id: 50c1e533-a887-4ce0-8e75-6160324040ea
	I0830 22:04:15.945233 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:15.945341 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:16.442472 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:16.442498 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:16.442509 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:16.442516 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:16.445209 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:16.445237 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:16.445247 1054224 round_trippers.go:580]     Audit-Id: 707414bf-9c08-4a0d-8fc4-578aaa744b20
	I0830 22:04:16.445254 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:16.445260 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:16.445267 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:16.445274 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:16.445281 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:16 GMT
	I0830 22:04:16.445397 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:16.445810 1054224 node_ready.go:58] node "multinode-994875" has status "Ready":"False"
	I0830 22:04:16.942912 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:16.942943 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:16.942953 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:16.942960 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:16.945831 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:16.945861 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:16.945870 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:16.945879 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:16.945886 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:16 GMT
	I0830 22:04:16.945892 1054224 round_trippers.go:580]     Audit-Id: bc40cd16-69d0-4185-84a3-1df1d0c4751d
	I0830 22:04:16.945899 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:16.945905 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:16.946019 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:17.443190 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:17.443212 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:17.443223 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:17.443230 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:17.445664 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:17.445685 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:17.445693 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:17.445700 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:17.445706 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:17.445713 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:17 GMT
	I0830 22:04:17.445720 1054224 round_trippers.go:580]     Audit-Id: 28e8e921-7977-4285-8e01-c422e3b9ab02
	I0830 22:04:17.445727 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:17.445834 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:17.942692 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:17.942718 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:17.942728 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:17.942736 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:17.945391 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:17.945418 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:17.945427 1054224 round_trippers.go:580]     Audit-Id: 67a35550-bf98-4c64-8870-e46b59882bd5
	I0830 22:04:17.945434 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:17.945441 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:17.945447 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:17.945454 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:17.945461 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:17 GMT
	I0830 22:04:17.945604 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:18.442594 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:18.442652 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:18.442662 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:18.442670 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:18.445476 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:18.445498 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:18.445507 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:18 GMT
	I0830 22:04:18.445515 1054224 round_trippers.go:580]     Audit-Id: 56607266-ce20-4a93-805e-cbd970748220
	I0830 22:04:18.445522 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:18.445528 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:18.445535 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:18.445541 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:18.445650 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:18.446042 1054224 node_ready.go:58] node "multinode-994875" has status "Ready":"False"
	I0830 22:04:18.942717 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:18.942740 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:18.942750 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:18.942757 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:18.945278 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:18.945302 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:18.945312 1054224 round_trippers.go:580]     Audit-Id: 62d2949f-3792-44ef-ade3-1a201797234b
	I0830 22:04:18.945318 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:18.945325 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:18.945333 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:18.945344 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:18.945351 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:18 GMT
	I0830 22:04:18.945568 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:19.442435 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:19.442460 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:19.442469 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:19.442477 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:19.444941 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:19.444967 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:19.444976 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:19 GMT
	I0830 22:04:19.444989 1054224 round_trippers.go:580]     Audit-Id: 454bfd5a-91d4-4346-87f0-f8e35dda010d
	I0830 22:04:19.444996 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:19.445005 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:19.445011 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:19.445018 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:19.445167 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:19.943145 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:19.943169 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:19.943179 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:19.943186 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:19.945715 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:19.945737 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:19.945746 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:19.945753 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:19.945760 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:19.945767 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:19.945774 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:19 GMT
	I0830 22:04:19.945780 1054224 round_trippers.go:580]     Audit-Id: 1be51a62-c1f1-4fd6-aa4b-e395080d6f70
	I0830 22:04:19.945923 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:20.442517 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:20.442537 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:20.442547 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:20.442554 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:20.445080 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:20.445105 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:20.445113 1054224 round_trippers.go:580]     Audit-Id: d5af84d4-7ae5-4a91-8e0d-cc255a1af890
	I0830 22:04:20.445120 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:20.445154 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:20.445161 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:20.445173 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:20.445185 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:20 GMT
	I0830 22:04:20.445290 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:20.942589 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:20.942615 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:20.942624 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:20.942633 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:20.945198 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:20.945227 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:20.945237 1054224 round_trippers.go:580]     Audit-Id: e0c729b4-8de3-48ab-bf3b-3a14dd3d8c11
	I0830 22:04:20.945244 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:20.945255 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:20.945263 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:20.945275 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:20.945283 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:20 GMT
	I0830 22:04:20.945667 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:20.946080 1054224 node_ready.go:58] node "multinode-994875" has status "Ready":"False"
	I0830 22:04:21.442971 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:21.442997 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:21.443007 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:21.443014 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:21.445688 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:21.445710 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:21.445719 1054224 round_trippers.go:580]     Audit-Id: a3b178ea-d255-4e16-a14e-e32fc073fe0a
	I0830 22:04:21.445726 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:21.445733 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:21.445740 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:21.445747 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:21.445753 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:21 GMT
	I0830 22:04:21.445923 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:21.942762 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:21.942785 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:21.942795 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:21.942802 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:21.945404 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:21.945424 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:21.945433 1054224 round_trippers.go:580]     Audit-Id: a831ac56-8188-42d4-8bb0-de141b9800ff
	I0830 22:04:21.945440 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:21.945447 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:21.945453 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:21.945460 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:21.945466 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:21 GMT
	I0830 22:04:21.945785 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:22.442422 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:22.442447 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:22.442457 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:22.442465 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:22.445274 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:22.445302 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:22.445312 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:22.445319 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:22.445325 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:22.445332 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:22.445339 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:22 GMT
	I0830 22:04:22.445349 1054224 round_trippers.go:580]     Audit-Id: 8ddb3806-53c0-4ef1-92cf-7f6667edab45
	I0830 22:04:22.445460 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:22.942417 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:22.942440 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:22.942451 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:22.942458 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:22.945164 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:22.945190 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:22.945200 1054224 round_trippers.go:580]     Audit-Id: d04acb58-4324-4f09-a02c-354079f738b9
	I0830 22:04:22.945207 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:22.945214 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:22.945221 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:22.945227 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:22.945235 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:22 GMT
	I0830 22:04:22.945376 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:23.442376 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:23.442397 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:23.442408 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:23.442415 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:23.444961 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:23.444989 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:23.444998 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:23.445005 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:23.445011 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:23.445019 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:23 GMT
	I0830 22:04:23.445026 1054224 round_trippers.go:580]     Audit-Id: b04c3cc2-b6c2-4fc0-b1f9-b63f5685351c
	I0830 22:04:23.445036 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:23.445186 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:23.445594 1054224 node_ready.go:58] node "multinode-994875" has status "Ready":"False"
	I0830 22:04:23.943366 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:23.943387 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:23.943397 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:23.943404 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:23.945941 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:23.945962 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:23.945971 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:23.945978 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:23 GMT
	I0830 22:04:23.945985 1054224 round_trippers.go:580]     Audit-Id: c6f5daa2-16ad-45d3-87bb-dcbd569d4444
	I0830 22:04:23.945992 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:23.946005 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:23.946015 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:23.946448 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:24.442421 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:24.442444 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:24.442454 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:24.442461 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:24.445160 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:24.445185 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:24.445194 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:24.445201 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:24 GMT
	I0830 22:04:24.445208 1054224 round_trippers.go:580]     Audit-Id: 2cb2f7f9-9b90-4c25-801e-a1777bcdd0e3
	I0830 22:04:24.445215 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:24.445222 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:24.445231 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:24.445429 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:24.942929 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:24.942963 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:24.942974 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:24.942982 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:24.945634 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:24.945660 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:24.945669 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:24.945676 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:24.945682 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:24.945689 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:24.945696 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:24 GMT
	I0830 22:04:24.945703 1054224 round_trippers.go:580]     Audit-Id: 6e8506c9-3e83-4c52-97be-be512605bb76
	I0830 22:04:24.946008 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:25.442817 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:25.442842 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:25.442852 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:25.442860 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:25.445432 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:25.445456 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:25.445466 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:25 GMT
	I0830 22:04:25.445473 1054224 round_trippers.go:580]     Audit-Id: b52d5a7d-38c5-490b-abed-5b1e71b90510
	I0830 22:04:25.445480 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:25.445486 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:25.445493 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:25.445499 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:25.445622 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:25.446019 1054224 node_ready.go:58] node "multinode-994875" has status "Ready":"False"
	I0830 22:04:25.942579 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:25.942606 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:25.942617 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:25.942625 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:25.945409 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:25.945437 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:25.945446 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:25.945453 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:25.945460 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:25.945467 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:25 GMT
	I0830 22:04:25.945474 1054224 round_trippers.go:580]     Audit-Id: ec82427c-e20e-4512-8443-c914910a1601
	I0830 22:04:25.945483 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:25.945621 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:26.442407 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:26.442428 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:26.442437 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:26.442445 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:26.444988 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:26.445014 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:26.445023 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:26 GMT
	I0830 22:04:26.445032 1054224 round_trippers.go:580]     Audit-Id: 1e8bb45a-dc72-445b-8421-bc8af6e8e37d
	I0830 22:04:26.445039 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:26.445045 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:26.445052 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:26.445062 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:26.445211 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:26.942727 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:26.942751 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:26.942761 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:26.942769 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:26.945383 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:26.945408 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:26.945418 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:26.945426 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:26 GMT
	I0830 22:04:26.945433 1054224 round_trippers.go:580]     Audit-Id: 6d264c4f-dca6-494e-b709-83fc7919af2e
	I0830 22:04:26.945439 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:26.945446 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:26.945452 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:26.945615 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:27.442504 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:27.442526 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:27.442536 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:27.442544 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:27.449277 1054224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0830 22:04:27.449302 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:27.449311 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:27 GMT
	I0830 22:04:27.449318 1054224 round_trippers.go:580]     Audit-Id: 53b32add-79c7-4e7c-94f0-5370f4725d8a
	I0830 22:04:27.449325 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:27.449332 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:27.449342 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:27.449349 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:27.449680 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:27.450097 1054224 node_ready.go:58] node "multinode-994875" has status "Ready":"False"
	I0830 22:04:27.942337 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:27.942361 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:27.942371 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:27.942379 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:27.944894 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:27.944916 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:27.944924 1054224 round_trippers.go:580]     Audit-Id: dddb9d86-90c1-4923-856e-23299dc75b95
	I0830 22:04:27.944931 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:27.944938 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:27.944944 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:27.944951 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:27.944958 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:27 GMT
	I0830 22:04:27.945167 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:28.443338 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:28.443361 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:28.443371 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:28.443378 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:28.446000 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:28.446027 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:28.446036 1054224 round_trippers.go:580]     Audit-Id: a3a10584-11d8-4ede-bd41-9f33b09b465e
	I0830 22:04:28.446043 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:28.446049 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:28.446056 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:28.446062 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:28.446071 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:28 GMT
	I0830 22:04:28.446318 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:28.943004 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:28.943029 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:28.943038 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:28.943046 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:28.945579 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:28.945600 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:28.945609 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:28 GMT
	I0830 22:04:28.945616 1054224 round_trippers.go:580]     Audit-Id: 2ab96c3c-92e6-4bec-bd14-17232fb4f8d1
	I0830 22:04:28.945625 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:28.945632 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:28.945639 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:28.945645 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:28.945909 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:29.442814 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:29.442833 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:29.442843 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:29.442850 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:29.445463 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:29.445485 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:29.445494 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:29.445501 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:29.445507 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:29.445514 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:29.445522 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:29 GMT
	I0830 22:04:29.445529 1054224 round_trippers.go:580]     Audit-Id: e5a57c64-341c-4187-a02a-4b519eb25932
	I0830 22:04:29.445678 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:29.942557 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:29.942581 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:29.942591 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:29.942598 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:29.945215 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:29.945236 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:29.945245 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:29 GMT
	I0830 22:04:29.945253 1054224 round_trippers.go:580]     Audit-Id: caa3d19a-0db9-43bf-9d24-ba3096efb54d
	I0830 22:04:29.945259 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:29.945266 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:29.945272 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:29.945279 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:29.945394 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:29.945781 1054224 node_ready.go:58] node "multinode-994875" has status "Ready":"False"
	I0830 22:04:30.442946 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:30.442969 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:30.442981 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:30.442989 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:30.446278 1054224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 22:04:30.446302 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:30.446312 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:30.446319 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:30.446326 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:30.446333 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:30 GMT
	I0830 22:04:30.446345 1054224 round_trippers.go:580]     Audit-Id: 0101d91d-6c1f-4b27-b02e-3e52db1fb8d9
	I0830 22:04:30.446352 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:30.446749 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:30.942745 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:30.942768 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:30.942777 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:30.942785 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:30.945477 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:30.945504 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:30.945520 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:30.945528 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:30.945535 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:30 GMT
	I0830 22:04:30.945553 1054224 round_trippers.go:580]     Audit-Id: 682e57a3-d36f-4879-8bcd-c1ec774d3b5d
	I0830 22:04:30.945649 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:30.945661 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:30.945845 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:31.443110 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:31.443133 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:31.443142 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:31.443150 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:31.445851 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:31.445872 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:31.445881 1054224 round_trippers.go:580]     Audit-Id: 70327c50-9dfc-4160-aef0-6f57865bd314
	I0830 22:04:31.445888 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:31.445895 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:31.445901 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:31.445908 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:31.445914 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:31 GMT
	I0830 22:04:31.446032 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:31.942856 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:31.942878 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:31.942889 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:31.942897 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:31.945352 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:31.945389 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:31.945399 1054224 round_trippers.go:580]     Audit-Id: 96550b87-9a65-4dcd-8795-bb924def5703
	I0830 22:04:31.945406 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:31.945413 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:31.945422 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:31.945431 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:31.945443 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:31 GMT
	I0830 22:04:31.945769 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"346","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0830 22:04:31.946171 1054224 node_ready.go:58] node "multinode-994875" has status "Ready":"False"
	I0830 22:04:32.442399 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:32.442429 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:32.442439 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:32.442447 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:32.445430 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:32.445458 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:32.445468 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:32.445476 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:32.445482 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:32.445489 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:32 GMT
	I0830 22:04:32.445496 1054224 round_trippers.go:580]     Audit-Id: e34d872e-97fa-4f58-b32b-39532fc94f2a
	I0830 22:04:32.445503 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:32.446056 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:04:32.446457 1054224 node_ready.go:49] node "multinode-994875" has status "Ready":"True"
	I0830 22:04:32.446475 1054224 node_ready.go:38] duration metric: took 31.542925357s waiting for node "multinode-994875" to be "Ready" ...
	I0830 22:04:32.446485 1054224 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
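The 31.5 s node wait recorded above is a plain client-side poll: roughly every 500 ms (visible in the timestamps) minikube re-fetches the node object and inspects its Ready condition, then moves on once it reports "True". A minimal sketch of that polling pattern with client-go follows; waitNodeReady is a hypothetical helper named for illustration only, not minikube's actual implementation (the real loops live behind node_ready.go and pod_ready.go and differ in detail).

// Sketch, under the assumptions above: fetch the node every 500ms and
// check its Ready condition until it is True or the timeout expires.
package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Treat transient API errors as "not ready yet" and keep polling.
				return false, nil
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

The per-pod wait that the log enters next follows the same shape, swapping in Pods(namespace).Get and the corev1.PodReady condition for the node lookup.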
	I0830 22:04:32.446590 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0830 22:04:32.446599 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:32.446607 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:32.446616 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:32.450625 1054224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 22:04:32.450652 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:32.450665 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:32.450674 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:32 GMT
	I0830 22:04:32.450681 1054224 round_trippers.go:580]     Audit-Id: f6091b5f-a387-4f2c-8a45-ec275c8d0a8f
	I0830 22:04:32.450688 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:32.450701 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:32.450707 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:32.452179 1054224 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"coredns-5dd5756b68-24ps6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117","resourceVersion":"442","creationTimestamp":"2023-08-30T22:04:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8f852740-67c6-4703-9481-742a0860e84e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f852740-67c6-4703-9481-742a0860e84e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55535 chars]
	I0830 22:04:32.456306 1054224 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-24ps6" in "kube-system" namespace to be "Ready" ...
	I0830 22:04:32.456408 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-24ps6
	I0830 22:04:32.456422 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:32.456432 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:32.456442 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:32.459406 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:32.459431 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:32.459441 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:32.459448 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:32.459458 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:32.459465 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:32.459472 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:32 GMT
	I0830 22:04:32.459479 1054224 round_trippers.go:580]     Audit-Id: 31e8f2b4-acc5-4c61-b18e-9aa277be1030
	I0830 22:04:32.459992 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-24ps6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117","resourceVersion":"442","creationTimestamp":"2023-08-30T22:04:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8f852740-67c6-4703-9481-742a0860e84e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f852740-67c6-4703-9481-742a0860e84e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0830 22:04:32.460590 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:32.460617 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:32.460627 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:32.460635 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:32.463507 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:32.463541 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:32.463550 1054224 round_trippers.go:580]     Audit-Id: 485d582e-86ee-40ec-a2e8-0546bab97515
	I0830 22:04:32.463558 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:32.463566 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:32.463572 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:32.463580 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:32.463586 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:32 GMT
	I0830 22:04:32.463797 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:04:32.464243 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-24ps6
	I0830 22:04:32.464257 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:32.464266 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:32.464273 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:32.466918 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:32.466943 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:32.466956 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:32.466964 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:32.466971 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:32.466978 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:32.466989 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:32 GMT
	I0830 22:04:32.466999 1054224 round_trippers.go:580]     Audit-Id: 9b1d64e8-db00-4361-9214-5b68d08a85bc
	I0830 22:04:32.467363 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-24ps6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117","resourceVersion":"442","creationTimestamp":"2023-08-30T22:04:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8f852740-67c6-4703-9481-742a0860e84e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f852740-67c6-4703-9481-742a0860e84e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0830 22:04:32.467930 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:32.467948 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:32.467957 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:32.467981 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:32.470444 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:32.470511 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:32.470533 1054224 round_trippers.go:580]     Audit-Id: ed73a8ca-4a68-4bbe-8770-23a8c9e32c0b
	I0830 22:04:32.470555 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:32.470588 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:32.470603 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:32.470610 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:32.470617 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:32 GMT
	I0830 22:04:32.470745 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:04:32.971882 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-24ps6
	I0830 22:04:32.971903 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:32.971913 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:32.971921 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:32.974494 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:32.974561 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:32.974583 1054224 round_trippers.go:580]     Audit-Id: a402fcf8-d212-46ae-a406-baa449802043
	I0830 22:04:32.974602 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:32.974633 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:32.974658 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:32.974670 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:32.974677 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:32 GMT
	I0830 22:04:32.974791 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-24ps6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117","resourceVersion":"442","creationTimestamp":"2023-08-30T22:04:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8f852740-67c6-4703-9481-742a0860e84e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f852740-67c6-4703-9481-742a0860e84e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0830 22:04:32.975313 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:32.975329 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:32.975338 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:32.975345 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:32.977661 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:32.977722 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:32.977745 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:32 GMT
	I0830 22:04:32.977764 1054224 round_trippers.go:580]     Audit-Id: e626fc61-3c4f-496c-961a-a2e0f5433d06
	I0830 22:04:32.977797 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:32.977816 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:32.977830 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:32.977837 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:32.977961 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:04:33.471421 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-24ps6
	I0830 22:04:33.471446 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:33.471457 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:33.471465 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:33.474206 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:33.474233 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:33.474242 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:33.474249 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:33.474256 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:33.474270 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:33.474279 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:33 GMT
	I0830 22:04:33.474291 1054224 round_trippers.go:580]     Audit-Id: c1bd927c-be34-4da2-8040-d8bb791babbb
	I0830 22:04:33.474402 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-24ps6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117","resourceVersion":"442","creationTimestamp":"2023-08-30T22:04:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8f852740-67c6-4703-9481-742a0860e84e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f852740-67c6-4703-9481-742a0860e84e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0830 22:04:33.474975 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:33.474990 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:33.474999 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:33.475006 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:33.477357 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:33.477423 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:33.477444 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:33.477464 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:33 GMT
	I0830 22:04:33.477494 1054224 round_trippers.go:580]     Audit-Id: c3e825e7-dfae-4088-bd6e-8a32aff61f62
	I0830 22:04:33.477510 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:33.477517 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:33.477524 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:33.477704 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:04:33.971738 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-24ps6
	I0830 22:04:33.971764 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:33.971775 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:33.971783 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:33.974434 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:33.974523 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:33.974539 1054224 round_trippers.go:580]     Audit-Id: 7034ac67-0bc3-402d-bf4a-f1381fd9b2e9
	I0830 22:04:33.974546 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:33.974561 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:33.974568 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:33.974593 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:33.974601 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:33 GMT
	I0830 22:04:33.974727 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-24ps6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117","resourceVersion":"453","creationTimestamp":"2023-08-30T22:04:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8f852740-67c6-4703-9481-742a0860e84e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f852740-67c6-4703-9481-742a0860e84e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0830 22:04:33.975300 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:33.975316 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:33.975324 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:33.975332 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:33.977895 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:33.977915 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:33.977924 1054224 round_trippers.go:580]     Audit-Id: 9c1cb267-f49c-40a7-ae24-9f3c87a9945b
	I0830 22:04:33.977931 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:33.977937 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:33.977943 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:33.977950 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:33.977957 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:33 GMT
	I0830 22:04:33.978093 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:04:33.978493 1054224 pod_ready.go:92] pod "coredns-5dd5756b68-24ps6" in "kube-system" namespace has status "Ready":"True"
	I0830 22:04:33.978513 1054224 pod_ready.go:81] duration metric: took 1.522171925s waiting for pod "coredns-5dd5756b68-24ps6" in "kube-system" namespace to be "Ready" ...
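
Each pod_ready cycle above repeats the same pattern against one pod: fetch it roughly every 500ms and test its PodReady condition. A sketch under those assumptions (kubeconfig path and timings assumed; pod name and namespace taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	err := wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-24ps6", metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			return isPodReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("coredns-5dd5756b68-24ps6 is Ready")
}
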
	I0830 22:04:33.978525 1054224 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:04:33.978590 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-994875
	I0830 22:04:33.978600 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:33.978608 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:33.978615 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:33.985659 1054224 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0830 22:04:33.985681 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:33.985695 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:33.985702 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:33.985709 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:33.985716 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:33 GMT
	I0830 22:04:33.985722 1054224 round_trippers.go:580]     Audit-Id: d20aa1a7-ce0e-4b80-a086-4d7e8a3e2b9e
	I0830 22:04:33.985729 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:33.986217 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-994875","namespace":"kube-system","uid":"3a724c5d-4cbe-4740-a64d-883f1859d257","resourceVersion":"427","creationTimestamp":"2023-08-30T22:03:45Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"6839f74d8b9c4802e53664200913f5de","kubernetes.io/config.mirror":"6839f74d8b9c4802e53664200913f5de","kubernetes.io/config.seen":"2023-08-30T22:03:39.081064328Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0830 22:04:33.986716 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:33.986734 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:33.986744 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:33.986754 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:33.989165 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:33.989184 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:33.989192 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:33.989200 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:33 GMT
	I0830 22:04:33.989206 1054224 round_trippers.go:580]     Audit-Id: 7a2ec3d3-946a-4d33-956b-e8abf8703525
	I0830 22:04:33.989212 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:33.989219 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:33.989225 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:33.989462 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:04:33.989850 1054224 pod_ready.go:92] pod "etcd-multinode-994875" in "kube-system" namespace has status "Ready":"True"
	I0830 22:04:33.989867 1054224 pod_ready.go:81] duration metric: took 11.331811ms waiting for pod "etcd-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:04:33.989880 1054224 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:04:33.989945 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-994875
	I0830 22:04:33.989954 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:33.989962 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:33.989969 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:33.998518 1054224 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0830 22:04:33.998543 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:33.998552 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:33.998559 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:33 GMT
	I0830 22:04:33.998565 1054224 round_trippers.go:580]     Audit-Id: 352d5fde-d504-43fb-af7e-bd5cf65248f3
	I0830 22:04:33.998572 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:33.998579 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:33.998585 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:33.998936 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-994875","namespace":"kube-system","uid":"1a58b1f6-9e2b-438e-b54b-d23f7804e728","resourceVersion":"424","creationTimestamp":"2023-08-30T22:03:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"ffe9011f46196e6529dac903c5aa8d04","kubernetes.io/config.mirror":"ffe9011f46196e6529dac903c5aa8d04","kubernetes.io/config.seen":"2023-08-30T22:03:47.448536926Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0830 22:04:33.999482 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:33.999496 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:33.999505 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:33.999513 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:34.001757 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:34.001775 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:34.001783 1054224 round_trippers.go:580]     Audit-Id: 58a4d563-2e37-4149-b9f4-04c4e42af492
	I0830 22:04:34.001791 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:34.001798 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:34.001806 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:34.001822 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:34.001829 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:34 GMT
	I0830 22:04:34.002226 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:04:34.002618 1054224 pod_ready.go:92] pod "kube-apiserver-multinode-994875" in "kube-system" namespace has status "Ready":"True"
	I0830 22:04:34.002634 1054224 pod_ready.go:81] duration metric: took 12.742584ms waiting for pod "kube-apiserver-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:04:34.002692 1054224 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:04:34.002753 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-994875
	I0830 22:04:34.002763 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:34.002771 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:34.002778 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:34.004749 1054224 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 22:04:34.004765 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:34.004773 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:34.004780 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:34.004787 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:34.004797 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:34.004809 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:34 GMT
	I0830 22:04:34.004816 1054224 round_trippers.go:580]     Audit-Id: 28dfd9a8-19b9-4a8e-a9a8-48b40f39a9b2
	I0830 22:04:34.004958 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-994875","namespace":"kube-system","uid":"ad217ba1-265c-477a-985c-9d9c21b976b8","resourceVersion":"425","creationTimestamp":"2023-08-30T22:03:47Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fdb61cdf64af05dc324b859049e23cf5","kubernetes.io/config.mirror":"fdb61cdf64af05dc324b859049e23cf5","kubernetes.io/config.seen":"2023-08-30T22:03:47.448538353Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0830 22:04:34.042760 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:34.042789 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:34.042800 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:34.042807 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:34.045600 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:34.045668 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:34.045689 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:34.045710 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:34.045744 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:34.045767 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:34 GMT
	I0830 22:04:34.045785 1054224 round_trippers.go:580]     Audit-Id: 0bf8f025-2654-45f0-88e4-eb19cff16241
	I0830 22:04:34.045805 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:34.046021 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:04:34.046474 1054224 pod_ready.go:92] pod "kube-controller-manager-multinode-994875" in "kube-system" namespace has status "Ready":"True"
	I0830 22:04:34.046495 1054224 pod_ready.go:81] duration metric: took 43.793915ms waiting for pod "kube-controller-manager-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:04:34.046510 1054224 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dn6c5" in "kube-system" namespace to be "Ready" ...
	I0830 22:04:34.242956 1054224 request.go:629] Waited for 196.36178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dn6c5
	I0830 22:04:34.243029 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dn6c5
	I0830 22:04:34.243037 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:34.243046 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:34.243053 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:34.245610 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:34.245634 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:34.245643 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:34.245649 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:34 GMT
	I0830 22:04:34.245656 1054224 round_trippers.go:580]     Audit-Id: ad9ca983-033b-4db7-ae58-99f8063b8fc6
	I0830 22:04:34.245662 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:34.245669 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:34.245678 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:34.245804 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dn6c5","generateName":"kube-proxy-","namespace":"kube-system","uid":"1ca7b9ca-0dca-404a-a450-5c05dee3e137","resourceVersion":"409","creationTimestamp":"2023-08-30T22:04:00Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aa74bb31-0475-4a10-acfb-8825232ed9aa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aa74bb31-0475-4a10-acfb-8825232ed9aa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0830 22:04:34.442566 1054224 request.go:629] Waited for 196.260865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:34.442625 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:34.442634 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:34.442649 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:34.442659 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:34.445204 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:34.445229 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:34.445239 1054224 round_trippers.go:580]     Audit-Id: d3a19a95-0106-46a4-8622-30bef111ff0f
	I0830 22:04:34.445246 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:34.445252 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:34.445259 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:34.445267 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:34.445276 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:34 GMT
	I0830 22:04:34.445484 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:04:34.445883 1054224 pod_ready.go:92] pod "kube-proxy-dn6c5" in "kube-system" namespace has status "Ready":"True"
	I0830 22:04:34.445905 1054224 pod_ready.go:81] duration metric: took 399.386823ms waiting for pod "kube-proxy-dn6c5" in "kube-system" namespace to be "Ready" ...
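
The "Waited for ... due to client-side throttling, not priority and fairness" lines in this wait come from client-go's default client-side rate limiter (5 QPS with a burst of 10), which these back-to-back pod/node polls exceed; the server-side APF headers in the responses are unrelated to the delay. In your own client the limits live on rest.Config — a minimal sketch, with arbitrary illustrative values:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests/second
	cfg.Burst = 100 // default burst is 10
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
	fmt.Println("client built with QPS=50, Burst=100")
}
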
	I0830 22:04:34.445917 1054224 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:04:34.643285 1054224 request.go:629] Waited for 197.299896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-994875
	I0830 22:04:34.643348 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-994875
	I0830 22:04:34.643359 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:34.643368 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:34.643384 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:34.645991 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:34.646021 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:34.646031 1054224 round_trippers.go:580]     Audit-Id: 23ff54ff-c9b2-45da-948e-1407f73d1c76
	I0830 22:04:34.646046 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:34.646053 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:34.646061 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:34.646067 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:34.646078 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:34 GMT
	I0830 22:04:34.646208 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-994875","namespace":"kube-system","uid":"baf097b2-79dd-4619-b805-5dcf6403427a","resourceVersion":"426","creationTimestamp":"2023-08-30T22:03:47Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0c56505fe72053c740eb16de681f4dc4","kubernetes.io/config.mirror":"0c56505fe72053c740eb16de681f4dc4","kubernetes.io/config.seen":"2023-08-30T22:03:47.448539174Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0830 22:04:34.843002 1054224 request.go:629] Waited for 196.333046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:34.843069 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:04:34.843082 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:34.843091 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:34.843098 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:34.845632 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:34.845657 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:34.845666 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:34 GMT
	I0830 22:04:34.845674 1054224 round_trippers.go:580]     Audit-Id: 23793eca-aae7-47e9-a1e8-1550a49d1cd9
	I0830 22:04:34.845685 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:34.845692 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:34.845699 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:34.845710 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:34.845813 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:04:34.846215 1054224 pod_ready.go:92] pod "kube-scheduler-multinode-994875" in "kube-system" namespace has status "Ready":"True"
	I0830 22:04:34.846237 1054224 pod_ready.go:81] duration metric: took 400.308421ms waiting for pod "kube-scheduler-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:04:34.846253 1054224 pod_ready.go:38] duration metric: took 2.399740155s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
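
The aggregate wait that just finished cycles through the six label selectors named in the log. One way to express a single pass of that sweep with client-go — kubeconfig path assumed, selector list copied from the log line above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s (%s): %s\n", p.Name, sel, p.Status.Phase)
		}
	}
}
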
	I0830 22:04:34.846269 1054224 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:04:34.846328 1054224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:04:34.858045 1054224 command_runner.go:130] > 1217
	I0830 22:04:34.859277 1054224 api_server.go:72] duration metric: took 34.05805191s to wait for apiserver process to appear ...
	I0830 22:04:34.859297 1054224 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:04:34.859317 1054224 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0830 22:04:34.868301 1054224 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
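
The healthz probe above is a raw GET against a non-resource path; a healthy apiserver answers 200 with the literal body "ok". A sketch of the same probe through the discovery client's REST client (kubeconfig path assumed):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // prints "ok" on a healthy apiserver
}
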
	I0830 22:04:34.868381 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0830 22:04:34.868393 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:34.868403 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:34.868413 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:34.869694 1054224 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0830 22:04:34.869715 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:34.869724 1054224 round_trippers.go:580]     Audit-Id: 0d0b91e4-b25c-4331-afce-e4aa4eb6d112
	I0830 22:04:34.869731 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:34.869737 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:34.869744 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:34.869755 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:34.869761 1054224 round_trippers.go:580]     Content-Length: 263
	I0830 22:04:34.869773 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:34 GMT
	I0830 22:04:34.869795 1054224 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0830 22:04:34.869880 1054224 api_server.go:141] control plane version: v1.28.1
	I0830 22:04:34.869898 1054224 api_server.go:131] duration metric: took 10.594976ms to wait for apiserver health ...
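
The GET /version request maps onto the discovery client's ServerVersion call, which decodes the same JSON body shown above. A sketch, reusing the assumed kubeconfig:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // error handling elided
	cs, _ := kubernetes.NewForConfig(cfg)
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println(v.GitVersion) // e.g. "v1.28.1", matching the response body above
}
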
	I0830 22:04:34.869906 1054224 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:04:35.043186 1054224 request.go:629] Waited for 173.211043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0830 22:04:35.043311 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0830 22:04:35.043352 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:35.043362 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:35.043371 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:35.047831 1054224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 22:04:35.047860 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:35.047881 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:35.047889 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:35.047896 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:35.047904 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:35 GMT
	I0830 22:04:35.047915 1054224 round_trippers.go:580]     Audit-Id: 00f10eb3-0de6-4dcd-888e-0e27ad2d2d07
	I0830 22:04:35.047921 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:35.048795 1054224 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"458"},"items":[{"metadata":{"name":"coredns-5dd5756b68-24ps6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117","resourceVersion":"453","creationTimestamp":"2023-08-30T22:04:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8f852740-67c6-4703-9481-742a0860e84e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f852740-67c6-4703-9481-742a0860e84e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0830 22:04:35.051298 1054224 system_pods.go:59] 8 kube-system pods found
	I0830 22:04:35.051350 1054224 system_pods.go:61] "coredns-5dd5756b68-24ps6" [a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117] Running
	I0830 22:04:35.051358 1054224 system_pods.go:61] "etcd-multinode-994875" [3a724c5d-4cbe-4740-a64d-883f1859d257] Running
	I0830 22:04:35.051371 1054224 system_pods.go:61] "kindnet-gdfw4" [375a0b4c-8f52-4769-83d0-7b723290fac2] Running
	I0830 22:04:35.051386 1054224 system_pods.go:61] "kube-apiserver-multinode-994875" [1a58b1f6-9e2b-438e-b54b-d23f7804e728] Running
	I0830 22:04:35.051393 1054224 system_pods.go:61] "kube-controller-manager-multinode-994875" [ad217ba1-265c-477a-985c-9d9c21b976b8] Running
	I0830 22:04:35.051398 1054224 system_pods.go:61] "kube-proxy-dn6c5" [1ca7b9ca-0dca-404a-a450-5c05dee3e137] Running
	I0830 22:04:35.051405 1054224 system_pods.go:61] "kube-scheduler-multinode-994875" [baf097b2-79dd-4619-b805-5dcf6403427a] Running
	I0830 22:04:35.051413 1054224 system_pods.go:61] "storage-provisioner" [69e4c211-d1a6-408e-b03c-6a194165f888] Running
	I0830 22:04:35.051420 1054224 system_pods.go:74] duration metric: took 181.508897ms to wait for pod list to return data ...
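The pod wait boils down to listing kube-system pods and checking each pod's phase, exactly as the per-pod "Running" lines above record. A hedged client-go sketch of that step (the kubeconfig location is an assumption for the demo; minikube wires its client up internally):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a kubeconfig at the default location (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        running := 0
        for _, p := range pods.Items {
            if p.Status.Phase == corev1.PodRunning {
                running++
            }
        }
        fmt.Printf("%d/%d kube-system pods running\n", running, len(pods.Items))
    }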
	I0830 22:04:35.051431 1054224 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:04:35.242873 1054224 request.go:629] Waited for 191.319545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0830 22:04:35.242967 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0830 22:04:35.242979 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:35.243010 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:35.243026 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:35.245766 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:35.245795 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:35.245815 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:35.245824 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:35.245831 1054224 round_trippers.go:580]     Content-Length: 261
	I0830 22:04:35.245839 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:35 GMT
	I0830 22:04:35.245849 1054224 round_trippers.go:580]     Audit-Id: b5e4d969-30f2-4bd4-870e-21455636d2d3
	I0830 22:04:35.245856 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:35.245866 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:35.245900 1054224 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"8cbf250d-6c93-4da8-b73e-04ae1352e376","resourceVersion":"349","creationTimestamp":"2023-08-30T22:04:00Z"}}]}
	I0830 22:04:35.246140 1054224 default_sa.go:45] found service account: "default"
	I0830 22:04:35.246157 1054224 default_sa.go:55] duration metric: took 194.720502ms for default service account to be created ...
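The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter, not from the server. A sketch of where those limits live on the client config (the numbers are illustrative only, not minikube's settings):

    package throttle

    import "k8s.io/client-go/rest"

    // RaiseLimits bumps the client-side QPS/Burst limits whose defaults
    // produce the throttling waits seen in this log.
    func RaiseLimits(cfg *rest.Config) {
        cfg.QPS = 50
        cfg.Burst = 100
    }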
	I0830 22:04:35.246166 1054224 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:04:35.442476 1054224 request.go:629] Waited for 196.247401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0830 22:04:35.442535 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0830 22:04:35.442544 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:35.442555 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:35.442565 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:35.446132 1054224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 22:04:35.446161 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:35.446170 1054224 round_trippers.go:580]     Audit-Id: b8e35cf0-8a48-4db4-8999-8a24054b2575
	I0830 22:04:35.446178 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:35.446184 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:35.446191 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:35.446201 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:35.446208 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:35 GMT
	I0830 22:04:35.446708 1054224 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"coredns-5dd5756b68-24ps6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117","resourceVersion":"453","creationTimestamp":"2023-08-30T22:04:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8f852740-67c6-4703-9481-742a0860e84e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f852740-67c6-4703-9481-742a0860e84e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0830 22:04:35.449101 1054224 system_pods.go:86] 8 kube-system pods found
	I0830 22:04:35.449147 1054224 system_pods.go:89] "coredns-5dd5756b68-24ps6" [a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117] Running
	I0830 22:04:35.449156 1054224 system_pods.go:89] "etcd-multinode-994875" [3a724c5d-4cbe-4740-a64d-883f1859d257] Running
	I0830 22:04:35.449163 1054224 system_pods.go:89] "kindnet-gdfw4" [375a0b4c-8f52-4769-83d0-7b723290fac2] Running
	I0830 22:04:35.449172 1054224 system_pods.go:89] "kube-apiserver-multinode-994875" [1a58b1f6-9e2b-438e-b54b-d23f7804e728] Running
	I0830 22:04:35.449180 1054224 system_pods.go:89] "kube-controller-manager-multinode-994875" [ad217ba1-265c-477a-985c-9d9c21b976b8] Running
	I0830 22:04:35.449184 1054224 system_pods.go:89] "kube-proxy-dn6c5" [1ca7b9ca-0dca-404a-a450-5c05dee3e137] Running
	I0830 22:04:35.449192 1054224 system_pods.go:89] "kube-scheduler-multinode-994875" [baf097b2-79dd-4619-b805-5dcf6403427a] Running
	I0830 22:04:35.449197 1054224 system_pods.go:89] "storage-provisioner" [69e4c211-d1a6-408e-b03c-6a194165f888] Running
	I0830 22:04:35.449204 1054224 system_pods.go:126] duration metric: took 203.032238ms to wait for k8s-apps to be running ...
	I0830 22:04:35.449217 1054224 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:04:35.449276 1054224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:04:35.463175 1054224 system_svc.go:56] duration metric: took 13.948096ms WaitForService to wait for kubelet.
	I0830 22:04:35.463209 1054224 kubeadm.go:581] duration metric: took 34.661983896s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
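The kubelet check above is nothing more than systemctl's exit code. A tiny sketch of the same probe via os/exec (the literal arguments mirror the log, including the extra "service" token minikube passes):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `is-active --quiet` prints nothing and exits 0 only when the
        // unit is active, so the error value is the whole answer.
        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }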
	I0830 22:04:35.463229 1054224 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:04:35.642476 1054224 request.go:629] Waited for 179.17788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0830 22:04:35.642541 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0830 22:04:35.642547 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:35.642556 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:35.642567 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:35.645185 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:35.645209 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:35.645224 1054224 round_trippers.go:580]     Audit-Id: 03eb63ae-2eb5-4d5b-8a35-309cbcd688b6
	I0830 22:04:35.645231 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:35.645238 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:35.645245 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:35.645251 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:35.645258 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:35 GMT
	I0830 22:04:35.645349 1054224 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0830 22:04:35.645819 1054224 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0830 22:04:35.645843 1054224 node_conditions.go:123] node cpu capacity is 2
	I0830 22:04:35.645855 1054224 node_conditions.go:105] duration metric: took 182.622012ms to run NodePressure ...
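The NodePressure step reads the two capacity figures above straight out of each node's status. A sketch using client-go (cs is a *kubernetes.Clientset built as in the earlier sketch):

    package nodecheck

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // PrintCapacity prints the per-node figures the log reports:
    // ephemeral-storage and CPU capacity from the node status.
    func PrintCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
        }
        return nil
    }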
	I0830 22:04:35.645865 1054224 start.go:228] waiting for startup goroutines ...
	I0830 22:04:35.645872 1054224 start.go:233] waiting for cluster config update ...
	I0830 22:04:35.645885 1054224 start.go:242] writing updated cluster config ...
	I0830 22:04:35.649248 1054224 out.go:177] 
	I0830 22:04:35.651718 1054224 config.go:182] Loaded profile config "multinode-994875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:04:35.651845 1054224 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/config.json ...
	I0830 22:04:35.654486 1054224 out.go:177] * Starting worker node multinode-994875-m02 in cluster multinode-994875
	I0830 22:04:35.656580 1054224 cache.go:122] Beginning downloading kic base image for docker with crio
	I0830 22:04:35.658745 1054224 out.go:177] * Pulling base image ...
	I0830 22:04:35.661492 1054224 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:04:35.661526 1054224 cache.go:57] Caching tarball of preloaded images
	I0830 22:04:35.661555 1054224 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0830 22:04:35.661825 1054224 preload.go:174] Found /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0830 22:04:35.661842 1054224 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0830 22:04:35.661989 1054224 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/config.json ...
	I0830 22:04:35.680999 1054224 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon, skipping pull
	I0830 22:04:35.681029 1054224 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad exists in daemon, skipping load
	I0830 22:04:35.681051 1054224 cache.go:195] Successfully downloaded all kic artifacts
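The "exists in daemon, skipping load" decision can be approximated with docker image inspect, whose exit status says whether the image is already present locally. A sketch (the tag is from the log with the digest dropped for brevity; minikube's real check also compares digests):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145"
        // `docker image inspect` exits non-zero when the image is absent.
        if err := exec.Command("docker", "image", "inspect", image).Run(); err != nil {
            fmt.Println("image missing, pulling...")
            if err := exec.Command("docker", "pull", image).Run(); err != nil {
                panic(err)
            }
            return
        }
        fmt.Println("found in local docker daemon, skipping pull")
    }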
	I0830 22:04:35.681082 1054224 start.go:365] acquiring machines lock for multinode-994875-m02: {Name:mk21eb039a4e2e19742cdbbaaae8fef317c05ade Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:04:35.681229 1054224 start.go:369] acquired machines lock for "multinode-994875-m02" in 123.003µs
	I0830 22:04:35.681264 1054224 start.go:93] Provisioning new machine with config: &{Name:multinode-994875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-994875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0830 22:04:35.681360 1054224 start.go:125] createHost starting for "m02" (driver="docker")
	I0830 22:04:35.684364 1054224 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0830 22:04:35.684494 1054224 start.go:159] libmachine.API.Create for "multinode-994875" (driver="docker")
	I0830 22:04:35.684527 1054224 client.go:168] LocalClient.Create starting
	I0830 22:04:35.684606 1054224 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem
	I0830 22:04:35.684647 1054224 main.go:141] libmachine: Decoding PEM data...
	I0830 22:04:35.684667 1054224 main.go:141] libmachine: Parsing certificate...
	I0830 22:04:35.684730 1054224 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem
	I0830 22:04:35.684754 1054224 main.go:141] libmachine: Decoding PEM data...
	I0830 22:04:35.684765 1054224 main.go:141] libmachine: Parsing certificate...
	I0830 22:04:35.685054 1054224 cli_runner.go:164] Run: docker network inspect multinode-994875 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0830 22:04:35.703836 1054224 network_create.go:76] Found existing network {name:multinode-994875 subnet:0x4000a4c4e0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0830 22:04:35.703879 1054224 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-994875-m02" container
	I0830 22:04:35.703955 1054224 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0830 22:04:35.721800 1054224 cli_runner.go:164] Run: docker volume create multinode-994875-m02 --label name.minikube.sigs.k8s.io=multinode-994875-m02 --label created_by.minikube.sigs.k8s.io=true
	I0830 22:04:35.740096 1054224 oci.go:103] Successfully created a docker volume multinode-994875-m02
	I0830 22:04:35.740191 1054224 cli_runner.go:164] Run: docker run --rm --name multinode-994875-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-994875-m02 --entrypoint /usr/bin/test -v multinode-994875-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -d /var/lib
	I0830 22:04:36.323774 1054224 oci.go:107] Successfully prepared a docker volume multinode-994875-m02
	I0830 22:04:36.323811 1054224 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 22:04:36.323831 1054224 kic.go:190] Starting extracting preloaded images to volume ...
	I0830 22:04:36.323912 1054224 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-994875-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir
	I0830 22:04:40.608947 1054224 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-994875-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad -I lz4 -xf /preloaded.tar -C /extractDir: (4.284994138s)
	I0830 22:04:40.608980 1054224 kic.go:199] duration metric: took 4.285144 seconds to extract preloaded images to volume
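The preload step above runs tar inside a throwaway container so the lz4 tarball is unpacked directly into the named volume. A sketch of the same docker invocation via os/exec (paths, volume name, and image are the ones from the log; error handling is minimal):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tarball := "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145"
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", "multinode-994875-m02:/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("extract failed: %v\n%s", err, out))
        }
        fmt.Println("preloaded images extracted into the volume")
    }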
	W0830 22:04:40.609117 1054224 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0830 22:04:40.609247 1054224 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0830 22:04:40.686810 1054224 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-994875-m02 --name multinode-994875-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-994875-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-994875-m02 --network multinode-994875 --ip 192.168.58.3 --volume multinode-994875-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad
	I0830 22:04:41.054494 1054224 cli_runner.go:164] Run: docker container inspect multinode-994875-m02 --format={{.State.Running}}
	I0830 22:04:41.088891 1054224 cli_runner.go:164] Run: docker container inspect multinode-994875-m02 --format={{.State.Status}}
	I0830 22:04:41.116036 1054224 cli_runner.go:164] Run: docker exec multinode-994875-m02 stat /var/lib/dpkg/alternatives/iptables
	I0830 22:04:41.216189 1054224 oci.go:144] the created container "multinode-994875-m02" has a running status.
	I0830 22:04:41.216219 1054224 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875-m02/id_rsa...
	I0830 22:04:41.521857 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0830 22:04:41.522013 1054224 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0830 22:04:41.555667 1054224 cli_runner.go:164] Run: docker container inspect multinode-994875-m02 --format={{.State.Status}}
	I0830 22:04:41.587723 1054224 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0830 22:04:41.587743 1054224 kic_runner.go:114] Args: [docker exec --privileged multinode-994875-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0830 22:04:41.728392 1054224 cli_runner.go:164] Run: docker container inspect multinode-994875-m02 --format={{.State.Status}}
	I0830 22:04:41.757826 1054224 machine.go:88] provisioning docker machine ...
	I0830 22:04:41.757856 1054224 ubuntu.go:169] provisioning hostname "multinode-994875-m02"
	I0830 22:04:41.757927 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875-m02
	I0830 22:04:41.786885 1054224 main.go:141] libmachine: Using SSH client type: native
	I0830 22:04:41.787339 1054224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34093 <nil> <nil>}
	I0830 22:04:41.787357 1054224 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-994875-m02 && echo "multinode-994875-m02" | sudo tee /etc/hostname
	I0830 22:04:41.788861 1054224 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0830 22:04:44.946695 1054224 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-994875-m02
	
	I0830 22:04:44.946855 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875-m02
	I0830 22:04:44.966631 1054224 main.go:141] libmachine: Using SSH client type: native
	I0830 22:04:44.967060 1054224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34093 <nil> <nil>}
	I0830 22:04:44.967087 1054224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-994875-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-994875-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-994875-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:04:45.128115 1054224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
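The hostname provisioning above is ordinary SSH against the container's forwarded port, with a retry to ride out the early "handshake failed: EOF" while sshd is still starting. A hedged sketch with golang.org/x/crypto/ssh (key path, port, and retry count are assumptions taken from this log; ignoring host keys is demo-only):

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/multinode-994875-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
            Timeout:         5 * time.Second,
        }
        var client *ssh.Client
        for i := 0; i < 10; i++ { // sshd may not be up yet (handshake EOF)
            if client, err = ssh.Dial("tcp", "127.0.0.1:34093", cfg); err == nil {
                break
            }
            time.Sleep(time.Second)
        }
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname multinode-994875-m02 && echo "multinode-994875-m02" | sudo tee /etc/hostname`)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }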
	I0830 22:04:45.128146 1054224 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17145-984449/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-984449/.minikube}
	I0830 22:04:45.128172 1054224 ubuntu.go:177] setting up certificates
	I0830 22:04:45.128183 1054224 provision.go:83] configureAuth start
	I0830 22:04:45.128255 1054224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994875-m02
	I0830 22:04:45.153581 1054224 provision.go:138] copyHostCerts
	I0830 22:04:45.153632 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem
	I0830 22:04:45.153674 1054224 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem, removing ...
	I0830 22:04:45.153687 1054224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem
	I0830 22:04:45.153897 1054224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem (1082 bytes)
	I0830 22:04:45.154021 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem
	I0830 22:04:45.154044 1054224 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem, removing ...
	I0830 22:04:45.154050 1054224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem
	I0830 22:04:45.154084 1054224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem (1123 bytes)
	I0830 22:04:45.154130 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem
	I0830 22:04:45.154146 1054224 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem, removing ...
	I0830 22:04:45.154150 1054224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem
	I0830 22:04:45.154173 1054224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem (1679 bytes)
	I0830 22:04:45.154224 1054224 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem org=jenkins.multinode-994875-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-994875-m02]
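The server cert is generated with the SAN list shown above. A compact crypto/x509 sketch producing a certificate with those SANs (self-signed here for brevity; minikube actually signs with its ca.pem/ca-key.pem pair):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-994875-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs copied from the log line above.
            DNSNames:    []string{"localhost", "minikube", "multinode-994875-m02"},
            IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
        }
        // Self-signed for the demo: template doubles as the issuer.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }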
	I0830 22:04:46.146668 1054224 provision.go:172] copyRemoteCerts
	I0830 22:04:46.146742 1054224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:04:46.146790 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875-m02
	I0830 22:04:46.167257 1054224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34093 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875-m02/id_rsa Username:docker}
	I0830 22:04:46.268447 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0830 22:04:46.268507 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0830 22:04:46.299415 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0830 22:04:46.299492 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0830 22:04:46.332733 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0830 22:04:46.332800 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:04:46.364783 1054224 provision.go:86] duration metric: configureAuth took 1.236581991s
	I0830 22:04:46.364808 1054224 ubuntu.go:193] setting minikube options for container-runtime
	I0830 22:04:46.365016 1054224 config.go:182] Loaded profile config "multinode-994875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:04:46.365168 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875-m02
	I0830 22:04:46.384112 1054224 main.go:141] libmachine: Using SSH client type: native
	I0830 22:04:46.384554 1054224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34093 <nil> <nil>}
	I0830 22:04:46.384580 1054224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:04:46.650624 1054224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:04:46.650650 1054224 machine.go:91] provisioned docker machine in 4.892804875s
	I0830 22:04:46.650660 1054224 client.go:171] LocalClient.Create took 10.966124622s
	I0830 22:04:46.650672 1054224 start.go:167] duration metric: libmachine.API.Create for "multinode-994875" took 10.96617949s
	I0830 22:04:46.650680 1054224 start.go:300] post-start starting for "multinode-994875-m02" (driver="docker")
	I0830 22:04:46.650689 1054224 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:04:46.650760 1054224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:04:46.650807 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875-m02
	I0830 22:04:46.669474 1054224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34093 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875-m02/id_rsa Username:docker}
	I0830 22:04:46.776453 1054224 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:04:46.780639 1054224 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0830 22:04:46.780657 1054224 command_runner.go:130] > NAME="Ubuntu"
	I0830 22:04:46.780673 1054224 command_runner.go:130] > VERSION_ID="22.04"
	I0830 22:04:46.780680 1054224 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0830 22:04:46.780686 1054224 command_runner.go:130] > VERSION_CODENAME=jammy
	I0830 22:04:46.780691 1054224 command_runner.go:130] > ID=ubuntu
	I0830 22:04:46.780696 1054224 command_runner.go:130] > ID_LIKE=debian
	I0830 22:04:46.780701 1054224 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0830 22:04:46.780707 1054224 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0830 22:04:46.780719 1054224 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0830 22:04:46.780728 1054224 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0830 22:04:46.780735 1054224 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0830 22:04:46.780790 1054224 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0830 22:04:46.780815 1054224 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0830 22:04:46.780830 1054224 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0830 22:04:46.780838 1054224 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0830 22:04:46.780850 1054224 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/addons for local assets ...
	I0830 22:04:46.780912 1054224 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/files for local assets ...
	I0830 22:04:46.781007 1054224 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem -> 9898252.pem in /etc/ssl/certs
	I0830 22:04:46.781018 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem -> /etc/ssl/certs/9898252.pem
	I0830 22:04:46.781124 1054224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:04:46.791920 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem --> /etc/ssl/certs/9898252.pem (1708 bytes)
	I0830 22:04:46.822594 1054224 start.go:303] post-start completed in 171.892374ms
	I0830 22:04:46.823092 1054224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994875-m02
	I0830 22:04:46.844065 1054224 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/config.json ...
	I0830 22:04:46.844365 1054224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 22:04:46.844417 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875-m02
	I0830 22:04:46.864222 1054224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34093 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875-m02/id_rsa Username:docker}
	I0830 22:04:46.967463 1054224 command_runner.go:130] > 18%!
	(MISSING)I0830 22:04:46.967558 1054224 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0830 22:04:46.973166 1054224 command_runner.go:130] > 160G
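The two df/awk probes above report /var usage percent and free gigabytes. The same figures can be computed without shelling out via a statfs call; a sketch (rounding differs slightly from df's):

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    func main() {
        var st unix.Statfs_t
        if err := unix.Statfs("/var", &st); err != nil {
            panic(err)
        }
        total := st.Blocks * uint64(st.Bsize)
        avail := st.Bavail * uint64(st.Bsize)
        fmt.Printf("used: %d%%, free: %dG\n",
            100*(total-avail)/total, avail/(1<<30))
    }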
	I0830 22:04:46.973658 1054224 start.go:128] duration metric: createHost completed in 11.292284863s
	I0830 22:04:46.973701 1054224 start.go:83] releasing machines lock for "multinode-994875-m02", held for 11.292455972s
	I0830 22:04:46.973797 1054224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994875-m02
	I0830 22:04:46.994929 1054224 out.go:177] * Found network options:
	I0830 22:04:46.996524 1054224 out.go:177]   - NO_PROXY=192.168.58.2
	W0830 22:04:46.998110 1054224 proxy.go:119] fail to check proxy env: Error ip not in block
	W0830 22:04:46.998167 1054224 proxy.go:119] fail to check proxy env: Error ip not in block
	I0830 22:04:46.998246 1054224 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:04:46.998291 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875-m02
	I0830 22:04:46.998541 1054224 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:04:46.998594 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875-m02
	I0830 22:04:47.020640 1054224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34093 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875-m02/id_rsa Username:docker}
	I0830 22:04:47.029399 1054224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34093 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875-m02/id_rsa Username:docker}
	I0830 22:04:47.287784 1054224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 22:04:47.287861 1054224 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0830 22:04:47.293579 1054224 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0830 22:04:47.293605 1054224 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0830 22:04:47.293613 1054224 command_runner.go:130] > Device: b3h/179d	Inode: 1301502     Links: 1
	I0830 22:04:47.293621 1054224 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 22:04:47.293647 1054224 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0830 22:04:47.293660 1054224 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0830 22:04:47.293667 1054224 command_runner.go:130] > Change: 2023-08-30 21:37:54.211838540 +0000
	I0830 22:04:47.293676 1054224 command_runner.go:130] >  Birth: 2023-08-30 21:37:54.211838540 +0000
	I0830 22:04:47.294076 1054224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:04:47.319211 1054224 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0830 22:04:47.319304 1054224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:04:47.360848 1054224 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0830 22:04:47.360893 1054224 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
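Disabling the CNI configs is just a rename: anything matching bridge or podman in /etc/cni/net.d gets a .mk_disabled suffix so CRI-O no longer loads it. A sketch of that step:

    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pattern)
            if err != nil {
                panic(err)
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    panic(err)
                }
            }
        }
    }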
	I0830 22:04:47.360902 1054224 start.go:466] detecting cgroup driver to use...
	I0830 22:04:47.360953 1054224 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0830 22:04:47.361017 1054224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:04:47.380502 1054224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:04:47.394523 1054224 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:04:47.394599 1054224 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:04:47.410827 1054224 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:04:47.427010 1054224 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0830 22:04:47.531764 1054224 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:04:47.633573 1054224 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0830 22:04:47.633647 1054224 docker.go:212] disabling docker service ...
	I0830 22:04:47.633726 1054224 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:04:47.657077 1054224 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:04:47.671956 1054224 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:04:47.690197 1054224 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0830 22:04:47.820862 1054224 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:04:47.940955 1054224 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0830 22:04:47.941042 1054224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:04:47.960496 1054224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:04:47.984200 1054224 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0830 22:04:47.985588 1054224 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0830 22:04:47.985660 1054224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:04:48.000157 1054224 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0830 22:04:48.000270 1054224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:04:48.014704 1054224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:04:48.031293 1054224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:04:48.047410 1054224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:04:48.060869 1054224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:04:48.071832 1054224 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0830 22:04:48.073357 1054224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:04:48.084870 1054224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:04:48.188840 1054224 ssh_runner.go:195] Run: sudo systemctl restart crio
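The sed edits above pin the pause image, force the cgroupfs cgroup manager, and re-add conmon_cgroup beneath it before crio is restarted. A sketch of the same rewrite of 02-crio.conf with regexes approximating the sed expressions (approximate: the delete here matches only lines starting with conmon_cgroup):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        data = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(data, nil)
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            panic(err)
        }
    }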
	I0830 22:04:48.317175 1054224 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0830 22:04:48.317287 1054224 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0830 22:04:48.322106 1054224 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0830 22:04:48.322134 1054224 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0830 22:04:48.322142 1054224 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I0830 22:04:48.322151 1054224 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 22:04:48.322162 1054224 command_runner.go:130] > Access: 2023-08-30 22:04:48.301800542 +0000
	I0830 22:04:48.322172 1054224 command_runner.go:130] > Modify: 2023-08-30 22:04:48.301800542 +0000
	I0830 22:04:48.322180 1054224 command_runner.go:130] > Change: 2023-08-30 22:04:48.301800542 +0000
	I0830 22:04:48.322187 1054224 command_runner.go:130] >  Birth: -
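The socket wait that follows is a poll with a deadline: stat the path until it exists and is a socket, then give up. A sketch (path and timeout from the log):

    package main

    import (
        "log"
        "os"
        "time"
    )

    func main() {
        deadline := time.Now().Add(60 * time.Second)
        for {
            if fi, err := os.Stat("/var/run/crio/crio.sock"); err == nil && fi.Mode()&os.ModeSocket != 0 {
                log.Println("crio socket is up")
                return
            }
            if time.Now().After(deadline) {
                log.Fatal("timed out waiting for /var/run/crio/crio.sock")
            }
            time.Sleep(500 * time.Millisecond)
        }
    }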
	I0830 22:04:48.322468 1054224 start.go:534] Will wait 60s for crictl version
	I0830 22:04:48.322528 1054224 ssh_runner.go:195] Run: which crictl
	I0830 22:04:48.326853 1054224 command_runner.go:130] > /usr/bin/crictl
	I0830 22:04:48.327305 1054224 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:04:48.367686 1054224 command_runner.go:130] > Version:  0.1.0
	I0830 22:04:48.367903 1054224 command_runner.go:130] > RuntimeName:  cri-o
	I0830 22:04:48.368092 1054224 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0830 22:04:48.368260 1054224 command_runner.go:130] > RuntimeApiVersion:  v1
	I0830 22:04:48.371206 1054224 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0830 22:04:48.371293 1054224 ssh_runner.go:195] Run: crio --version
	I0830 22:04:48.416776 1054224 command_runner.go:130] > crio version 1.24.6
	I0830 22:04:48.416807 1054224 command_runner.go:130] > Version:          1.24.6
	I0830 22:04:48.416816 1054224 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0830 22:04:48.416822 1054224 command_runner.go:130] > GitTreeState:     clean
	I0830 22:04:48.416829 1054224 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0830 22:04:48.416837 1054224 command_runner.go:130] > GoVersion:        go1.18.2
	I0830 22:04:48.416843 1054224 command_runner.go:130] > Compiler:         gc
	I0830 22:04:48.416848 1054224 command_runner.go:130] > Platform:         linux/arm64
	I0830 22:04:48.416855 1054224 command_runner.go:130] > Linkmode:         dynamic
	I0830 22:04:48.416878 1054224 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0830 22:04:48.416883 1054224 command_runner.go:130] > SeccompEnabled:   true
	I0830 22:04:48.416891 1054224 command_runner.go:130] > AppArmorEnabled:  false
	I0830 22:04:48.416981 1054224 ssh_runner.go:195] Run: crio --version
	I0830 22:04:48.460037 1054224 command_runner.go:130] > crio version 1.24.6
	I0830 22:04:48.460058 1054224 command_runner.go:130] > Version:          1.24.6
	I0830 22:04:48.460069 1054224 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0830 22:04:48.460075 1054224 command_runner.go:130] > GitTreeState:     clean
	I0830 22:04:48.460083 1054224 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0830 22:04:48.460089 1054224 command_runner.go:130] > GoVersion:        go1.18.2
	I0830 22:04:48.460094 1054224 command_runner.go:130] > Compiler:         gc
	I0830 22:04:48.460099 1054224 command_runner.go:130] > Platform:         linux/arm64
	I0830 22:04:48.460105 1054224 command_runner.go:130] > Linkmode:         dynamic
	I0830 22:04:48.460119 1054224 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0830 22:04:48.460126 1054224 command_runner.go:130] > SeccompEnabled:   true
	I0830 22:04:48.460132 1054224 command_runner.go:130] > AppArmorEnabled:  false
	I0830 22:04:48.466242 1054224 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0830 22:04:48.468008 1054224 out.go:177]   - env NO_PROXY=192.168.58.2
	I0830 22:04:48.469594 1054224 cli_runner.go:164] Run: docker network inspect multinode-994875 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0830 22:04:48.487198 1054224 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0830 22:04:48.491934 1054224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:04:48.505357 1054224 certs.go:56] Setting up /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875 for IP: 192.168.58.3
	I0830 22:04:48.505390 1054224 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1c893f087ee62e9f919bfa6a6de84891ee8b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:04:48.505521 1054224 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.key
	I0830 22:04:48.505578 1054224 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.key
	I0830 22:04:48.505596 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0830 22:04:48.505612 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0830 22:04:48.505626 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0830 22:04:48.505637 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0830 22:04:48.505693 1054224 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/989825.pem (1338 bytes)
	W0830 22:04:48.505725 1054224 certs.go:433] ignoring /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/989825_empty.pem, impossibly tiny 0 bytes
	I0830 22:04:48.505737 1054224 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem (1675 bytes)
	I0830 22:04:48.505765 1054224 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem (1082 bytes)
	I0830 22:04:48.505793 1054224 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:04:48.505821 1054224 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem (1679 bytes)
	I0830 22:04:48.505867 1054224 certs.go:437] found cert: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem (1708 bytes)
	I0830 22:04:48.505907 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem -> /usr/share/ca-certificates/9898252.pem
	I0830 22:04:48.505923 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:04:48.505939 1054224 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/989825.pem -> /usr/share/ca-certificates/989825.pem
	I0830 22:04:48.506276 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:04:48.536018 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:04:48.568239 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:04:48.600069 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0830 22:04:48.631301 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem --> /usr/share/ca-certificates/9898252.pem (1708 bytes)
	I0830 22:04:48.661911 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:04:48.693026 1054224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/certs/989825.pem --> /usr/share/ca-certificates/989825.pem (1338 bytes)
	I0830 22:04:48.722179 1054224 ssh_runner.go:195] Run: openssl version
	I0830 22:04:48.729112 1054224 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0830 22:04:48.729292 1054224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/989825.pem && ln -fs /usr/share/ca-certificates/989825.pem /etc/ssl/certs/989825.pem"
	I0830 22:04:48.741628 1054224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/989825.pem
	I0830 22:04:48.746429 1054224 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 30 21:45 /usr/share/ca-certificates/989825.pem
	I0830 22:04:48.746502 1054224 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 21:45 /usr/share/ca-certificates/989825.pem
	I0830 22:04:48.746565 1054224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/989825.pem
	I0830 22:04:48.755517 1054224 command_runner.go:130] > 51391683
	I0830 22:04:48.755637 1054224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/989825.pem /etc/ssl/certs/51391683.0"
	I0830 22:04:48.767741 1054224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9898252.pem && ln -fs /usr/share/ca-certificates/9898252.pem /etc/ssl/certs/9898252.pem"
	I0830 22:04:48.779946 1054224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9898252.pem
	I0830 22:04:48.784511 1054224 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 30 21:45 /usr/share/ca-certificates/9898252.pem
	I0830 22:04:48.784543 1054224 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 21:45 /usr/share/ca-certificates/9898252.pem
	I0830 22:04:48.784600 1054224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9898252.pem
	I0830 22:04:48.792725 1054224 command_runner.go:130] > 3ec20f2e
	I0830 22:04:48.793209 1054224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9898252.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 22:04:48.804898 1054224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:04:48.816732 1054224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:04:48.821824 1054224 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 30 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:04:48.821857 1054224 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 21:38 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:04:48.821909 1054224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:04:48.830416 1054224 command_runner.go:130] > b5213941
	I0830 22:04:48.830875 1054224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
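Each CA install above follows OpenSSL's subject-hash convention: hash the PEM with `openssl x509 -hash -noout` and symlink it as <hash>.0 in /etc/ssl/certs so lookup by subject hash finds it, which is why 51391683, 3ec20f2e, and b5213941 appear in the log. A sketch of one install:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const cert = "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(cert, link); err != nil {
                panic(err)
            }
        }
        fmt.Println("installed", link)
    }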
	I0830 22:04:48.842469 1054224 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:04:48.846940 1054224 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 22:04:48.847021 1054224 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 22:04:48.847121 1054224 ssh_runner.go:195] Run: crio config
	I0830 22:04:48.908071 1054224 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0830 22:04:48.908096 1054224 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0830 22:04:48.908105 1054224 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0830 22:04:48.908109 1054224 command_runner.go:130] > #
	I0830 22:04:48.908118 1054224 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0830 22:04:48.908128 1054224 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0830 22:04:48.908136 1054224 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0830 22:04:48.908147 1054224 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0830 22:04:48.908154 1054224 command_runner.go:130] > # reload'.
	I0830 22:04:48.908162 1054224 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0830 22:04:48.908173 1054224 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0830 22:04:48.908180 1054224 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0830 22:04:48.908190 1054224 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0830 22:04:48.908195 1054224 command_runner.go:130] > [crio]
	I0830 22:04:48.908202 1054224 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0830 22:04:48.908212 1054224 command_runner.go:130] > # container images, in this directory.
	I0830 22:04:48.908784 1054224 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0830 22:04:48.908803 1054224 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0830 22:04:48.908810 1054224 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0830 22:04:48.908817 1054224 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0830 22:04:48.908828 1054224 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0830 22:04:48.908836 1054224 command_runner.go:130] > # storage_driver = "vfs"
	I0830 22:04:48.908843 1054224 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I0830 22:04:48.908850 1054224 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0830 22:04:48.908857 1054224 command_runner.go:130] > # storage_option = [
	I0830 22:04:48.908862 1054224 command_runner.go:130] > # ]
	I0830 22:04:48.908872 1054224 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0830 22:04:48.908882 1054224 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0830 22:04:48.908892 1054224 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0830 22:04:48.908903 1054224 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0830 22:04:48.908910 1054224 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0830 22:04:48.908919 1054224 command_runner.go:130] > # always happen on a node reboot
	I0830 22:04:48.908925 1054224 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0830 22:04:48.908931 1054224 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0830 22:04:48.908940 1054224 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0830 22:04:48.908952 1054224 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0830 22:04:48.908968 1054224 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0830 22:04:48.908977 1054224 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0830 22:04:48.908987 1054224 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0830 22:04:48.908994 1054224 command_runner.go:130] > # internal_wipe = true
	I0830 22:04:48.909001 1054224 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0830 22:04:48.909012 1054224 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0830 22:04:48.909018 1054224 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0830 22:04:48.909025 1054224 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0830 22:04:48.909034 1054224 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0830 22:04:48.909043 1054224 command_runner.go:130] > [crio.api]
	I0830 22:04:48.909050 1054224 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0830 22:04:48.909056 1054224 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0830 22:04:48.909066 1054224 command_runner.go:130] > # IP address on which the stream server will listen.
	I0830 22:04:48.909074 1054224 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0830 22:04:48.909082 1054224 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0830 22:04:48.909090 1054224 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0830 22:04:48.909095 1054224 command_runner.go:130] > # stream_port = "0"
	I0830 22:04:48.909106 1054224 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0830 22:04:48.909111 1054224 command_runner.go:130] > # stream_enable_tls = false
	I0830 22:04:48.909120 1054224 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0830 22:04:48.909433 1054224 command_runner.go:130] > # stream_idle_timeout = ""
	I0830 22:04:48.909451 1054224 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0830 22:04:48.909459 1054224 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0830 22:04:48.909467 1054224 command_runner.go:130] > # minutes.
	I0830 22:04:48.909475 1054224 command_runner.go:130] > # stream_tls_cert = ""
	I0830 22:04:48.909483 1054224 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0830 22:04:48.909494 1054224 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0830 22:04:48.911178 1054224 command_runner.go:130] > # stream_tls_key = ""
	I0830 22:04:48.911198 1054224 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0830 22:04:48.911206 1054224 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0830 22:04:48.911213 1054224 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0830 22:04:48.911221 1054224 command_runner.go:130] > # stream_tls_ca = ""
	I0830 22:04:48.911234 1054224 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0830 22:04:48.911243 1054224 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0830 22:04:48.911252 1054224 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0830 22:04:48.911261 1054224 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0830 22:04:48.911274 1054224 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0830 22:04:48.911285 1054224 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0830 22:04:48.911290 1054224 command_runner.go:130] > [crio.runtime]
	I0830 22:04:48.911297 1054224 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0830 22:04:48.911306 1054224 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0830 22:04:48.911310 1054224 command_runner.go:130] > # "nofile=1024:2048"
	I0830 22:04:48.911318 1054224 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0830 22:04:48.911327 1054224 command_runner.go:130] > # default_ulimits = [
	I0830 22:04:48.911331 1054224 command_runner.go:130] > # ]
	I0830 22:04:48.911338 1054224 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0830 22:04:48.911345 1054224 command_runner.go:130] > # no_pivot = false
	I0830 22:04:48.911352 1054224 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0830 22:04:48.911361 1054224 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0830 22:04:48.911372 1054224 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0830 22:04:48.911379 1054224 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0830 22:04:48.911387 1054224 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0830 22:04:48.911398 1054224 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0830 22:04:48.911403 1054224 command_runner.go:130] > # conmon = ""
	I0830 22:04:48.911411 1054224 command_runner.go:130] > # Cgroup setting for conmon
	I0830 22:04:48.911425 1054224 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0830 22:04:48.911430 1054224 command_runner.go:130] > conmon_cgroup = "pod"
	I0830 22:04:48.911438 1054224 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0830 22:04:48.911447 1054224 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0830 22:04:48.911455 1054224 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0830 22:04:48.911462 1054224 command_runner.go:130] > # conmon_env = [
	I0830 22:04:48.911466 1054224 command_runner.go:130] > # ]
	I0830 22:04:48.911473 1054224 command_runner.go:130] > # Additional environment variables to set for all the
	I0830 22:04:48.911481 1054224 command_runner.go:130] > # containers. These are overridden if set in the
	I0830 22:04:48.911488 1054224 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0830 22:04:48.911495 1054224 command_runner.go:130] > # default_env = [
	I0830 22:04:48.911499 1054224 command_runner.go:130] > # ]
	I0830 22:04:48.911506 1054224 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0830 22:04:48.911514 1054224 command_runner.go:130] > # selinux = false
	I0830 22:04:48.911522 1054224 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0830 22:04:48.911530 1054224 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0830 22:04:48.911540 1054224 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0830 22:04:48.911546 1054224 command_runner.go:130] > # seccomp_profile = ""
	I0830 22:04:48.911553 1054224 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0830 22:04:48.911563 1054224 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0830 22:04:48.911570 1054224 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0830 22:04:48.911577 1054224 command_runner.go:130] > # which might increase security.
	I0830 22:04:48.911583 1054224 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0830 22:04:48.911593 1054224 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0830 22:04:48.911601 1054224 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0830 22:04:48.911612 1054224 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0830 22:04:48.911619 1054224 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0830 22:04:48.911625 1054224 command_runner.go:130] > # This option supports live configuration reload.
	I0830 22:04:48.911633 1054224 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0830 22:04:48.911640 1054224 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0830 22:04:48.911648 1054224 command_runner.go:130] > # the cgroup blockio controller.
	I0830 22:04:48.911653 1054224 command_runner.go:130] > # blockio_config_file = ""
	I0830 22:04:48.911661 1054224 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0830 22:04:48.911666 1054224 command_runner.go:130] > # irqbalance daemon.
	I0830 22:04:48.911674 1054224 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0830 22:04:48.911682 1054224 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0830 22:04:48.911692 1054224 command_runner.go:130] > # This option supports live configuration reload.
	I0830 22:04:48.911697 1054224 command_runner.go:130] > # rdt_config_file = ""
	I0830 22:04:48.911704 1054224 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0830 22:04:48.911711 1054224 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0830 22:04:48.911719 1054224 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0830 22:04:48.911733 1054224 command_runner.go:130] > # separate_pull_cgroup = ""
	I0830 22:04:48.911741 1054224 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0830 22:04:48.911765 1054224 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0830 22:04:48.911775 1054224 command_runner.go:130] > # will be added.
	I0830 22:04:48.911780 1054224 command_runner.go:130] > # default_capabilities = [
	I0830 22:04:48.911784 1054224 command_runner.go:130] > # 	"CHOWN",
	I0830 22:04:48.911792 1054224 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0830 22:04:48.911797 1054224 command_runner.go:130] > # 	"FSETID",
	I0830 22:04:48.911803 1054224 command_runner.go:130] > # 	"FOWNER",
	I0830 22:04:48.911811 1054224 command_runner.go:130] > # 	"SETGID",
	I0830 22:04:48.911815 1054224 command_runner.go:130] > # 	"SETUID",
	I0830 22:04:48.911822 1054224 command_runner.go:130] > # 	"SETPCAP",
	I0830 22:04:48.911828 1054224 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0830 22:04:48.911835 1054224 command_runner.go:130] > # 	"KILL",
	I0830 22:04:48.911839 1054224 command_runner.go:130] > # ]
	I0830 22:04:48.911847 1054224 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0830 22:04:48.911859 1054224 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0830 22:04:48.911865 1054224 command_runner.go:130] > # add_inheritable_capabilities = true
	I0830 22:04:48.911875 1054224 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0830 22:04:48.911882 1054224 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0830 22:04:48.911888 1054224 command_runner.go:130] > # default_sysctls = [
	I0830 22:04:48.911894 1054224 command_runner.go:130] > # ]
	I0830 22:04:48.911900 1054224 command_runner.go:130] > # List of devices on the host that a
	I0830 22:04:48.911909 1054224 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0830 22:04:48.911914 1054224 command_runner.go:130] > # allowed_devices = [
	I0830 22:04:48.911919 1054224 command_runner.go:130] > # 	"/dev/fuse",
	I0830 22:04:48.911923 1054224 command_runner.go:130] > # ]
	I0830 22:04:48.911930 1054224 command_runner.go:130] > # List of additional devices, specified as
	I0830 22:04:48.911947 1054224 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0830 22:04:48.911956 1054224 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0830 22:04:48.911964 1054224 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0830 22:04:48.911969 1054224 command_runner.go:130] > # additional_devices = [
	I0830 22:04:48.911975 1054224 command_runner.go:130] > # ]
	I0830 22:04:48.911982 1054224 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0830 22:04:48.911988 1054224 command_runner.go:130] > # cdi_spec_dirs = [
	I0830 22:04:48.911993 1054224 command_runner.go:130] > # 	"/etc/cdi",
	I0830 22:04:48.912000 1054224 command_runner.go:130] > # 	"/var/run/cdi",
	I0830 22:04:48.912004 1054224 command_runner.go:130] > # ]
	I0830 22:04:48.912020 1054224 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0830 22:04:48.912028 1054224 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0830 22:04:48.912035 1054224 command_runner.go:130] > # Defaults to false.
	I0830 22:04:48.912042 1054224 command_runner.go:130] > # device_ownership_from_security_context = false
	I0830 22:04:48.912050 1054224 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0830 22:04:48.912059 1054224 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0830 22:04:48.912064 1054224 command_runner.go:130] > # hooks_dir = [
	I0830 22:04:48.912070 1054224 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0830 22:04:48.912076 1054224 command_runner.go:130] > # ]
	I0830 22:04:48.912084 1054224 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0830 22:04:48.912095 1054224 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0830 22:04:48.912101 1054224 command_runner.go:130] > # its default mounts from the following two files:
	I0830 22:04:48.912105 1054224 command_runner.go:130] > #
	I0830 22:04:48.912115 1054224 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0830 22:04:48.912125 1054224 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0830 22:04:48.912132 1054224 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0830 22:04:48.912139 1054224 command_runner.go:130] > #
	I0830 22:04:48.912146 1054224 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0830 22:04:48.912153 1054224 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0830 22:04:48.912163 1054224 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0830 22:04:48.912174 1054224 command_runner.go:130] > #      only add mounts it finds in this file.
	I0830 22:04:48.912178 1054224 command_runner.go:130] > #
	I0830 22:04:48.912186 1054224 command_runner.go:130] > # default_mounts_file = ""
	I0830 22:04:48.912192 1054224 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0830 22:04:48.912203 1054224 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0830 22:04:48.912207 1054224 command_runner.go:130] > # pids_limit = 0
	I0830 22:04:48.912222 1054224 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0830 22:04:48.912230 1054224 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0830 22:04:48.912240 1054224 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0830 22:04:48.912252 1054224 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0830 22:04:48.912259 1054224 command_runner.go:130] > # log_size_max = -1
	I0830 22:04:48.912268 1054224 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0830 22:04:48.912273 1054224 command_runner.go:130] > # log_to_journald = false
	I0830 22:04:48.912283 1054224 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0830 22:04:48.912289 1054224 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0830 22:04:48.912298 1054224 command_runner.go:130] > # Path to directory for container attach sockets.
	I0830 22:04:48.912306 1054224 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0830 22:04:48.912313 1054224 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0830 22:04:48.912318 1054224 command_runner.go:130] > # bind_mount_prefix = ""
	I0830 22:04:48.912327 1054224 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0830 22:04:48.912336 1054224 command_runner.go:130] > # read_only = false
	I0830 22:04:48.912343 1054224 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0830 22:04:48.912351 1054224 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0830 22:04:48.912358 1054224 command_runner.go:130] > # live configuration reload.
	I0830 22:04:48.912365 1054224 command_runner.go:130] > # log_level = "info"
	I0830 22:04:48.912373 1054224 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0830 22:04:48.912382 1054224 command_runner.go:130] > # This option supports live configuration reload.
	I0830 22:04:48.912387 1054224 command_runner.go:130] > # log_filter = ""
	I0830 22:04:48.912396 1054224 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0830 22:04:48.912406 1054224 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0830 22:04:48.912411 1054224 command_runner.go:130] > # separated by comma.
	I0830 22:04:48.912417 1054224 command_runner.go:130] > # uid_mappings = ""
	I0830 22:04:48.912424 1054224 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0830 22:04:48.912447 1054224 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0830 22:04:48.912457 1054224 command_runner.go:130] > # separated by comma.
	I0830 22:04:48.912462 1054224 command_runner.go:130] > # gid_mappings = ""
	I0830 22:04:48.912470 1054224 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0830 22:04:48.912481 1054224 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0830 22:04:48.912488 1054224 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0830 22:04:48.912496 1054224 command_runner.go:130] > # minimum_mappable_uid = -1
	I0830 22:04:48.912503 1054224 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0830 22:04:48.912513 1054224 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0830 22:04:48.912521 1054224 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0830 22:04:48.912529 1054224 command_runner.go:130] > # minimum_mappable_gid = -1
	I0830 22:04:48.912537 1054224 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0830 22:04:48.912544 1054224 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0830 22:04:48.912555 1054224 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0830 22:04:48.912560 1054224 command_runner.go:130] > # ctr_stop_timeout = 30
	I0830 22:04:48.912569 1054224 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0830 22:04:48.912576 1054224 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0830 22:04:48.912582 1054224 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0830 22:04:48.912589 1054224 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0830 22:04:48.912596 1054224 command_runner.go:130] > # drop_infra_ctr = true
	I0830 22:04:48.912604 1054224 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0830 22:04:48.912613 1054224 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0830 22:04:48.912622 1054224 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0830 22:04:48.912629 1054224 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0830 22:04:48.912637 1054224 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0830 22:04:48.912646 1054224 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0830 22:04:48.912651 1054224 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0830 22:04:48.912660 1054224 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0830 22:04:48.912670 1054224 command_runner.go:130] > # pinns_path = ""
	I0830 22:04:48.912678 1054224 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0830 22:04:48.912688 1054224 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0830 22:04:48.912701 1054224 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0830 22:04:48.912707 1054224 command_runner.go:130] > # default_runtime = "runc"
	I0830 22:04:48.912715 1054224 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0830 22:04:48.912724 1054224 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I0830 22:04:48.912740 1054224 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0830 22:04:48.912746 1054224 command_runner.go:130] > # creation as a file is not desired either.
	I0830 22:04:48.912757 1054224 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0830 22:04:48.912766 1054224 command_runner.go:130] > # the hostname is being managed dynamically.
	I0830 22:04:48.912771 1054224 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0830 22:04:48.912775 1054224 command_runner.go:130] > # ]
	I0830 22:04:48.912783 1054224 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0830 22:04:48.912794 1054224 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0830 22:04:48.912802 1054224 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0830 22:04:48.912813 1054224 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0830 22:04:48.912817 1054224 command_runner.go:130] > #
	I0830 22:04:48.912823 1054224 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0830 22:04:48.912829 1054224 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0830 22:04:48.912836 1054224 command_runner.go:130] > #  runtime_type = "oci"
	I0830 22:04:48.912842 1054224 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0830 22:04:48.912850 1054224 command_runner.go:130] > #  privileged_without_host_devices = false
	I0830 22:04:48.912855 1054224 command_runner.go:130] > #  allowed_annotations = []
	I0830 22:04:48.912859 1054224 command_runner.go:130] > # Where:
	I0830 22:04:48.912866 1054224 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0830 22:04:48.912876 1054224 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0830 22:04:48.912884 1054224 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0830 22:04:48.912894 1054224 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0830 22:04:48.912899 1054224 command_runner.go:130] > #   in $PATH.
	I0830 22:04:48.912906 1054224 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0830 22:04:48.912912 1054224 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0830 22:04:48.912921 1054224 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0830 22:04:48.912926 1054224 command_runner.go:130] > #   state.
	I0830 22:04:48.912936 1054224 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0830 22:04:48.912945 1054224 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0830 22:04:48.912954 1054224 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0830 22:04:48.912969 1054224 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0830 22:04:48.912977 1054224 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0830 22:04:48.912987 1054224 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0830 22:04:48.912994 1054224 command_runner.go:130] > #   The currently recognized values are:
	I0830 22:04:48.913003 1054224 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0830 22:04:48.913026 1054224 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0830 22:04:48.913039 1054224 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0830 22:04:48.913046 1054224 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0830 22:04:48.913057 1054224 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0830 22:04:48.913065 1054224 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0830 22:04:48.913077 1054224 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0830 22:04:48.913085 1054224 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0830 22:04:48.913092 1054224 command_runner.go:130] > #   should be moved to the container's cgroup
	I0830 22:04:48.913099 1054224 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0830 22:04:48.913107 1054224 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0830 22:04:48.913112 1054224 command_runner.go:130] > runtime_type = "oci"
	I0830 22:04:48.913118 1054224 command_runner.go:130] > runtime_root = "/run/runc"
	I0830 22:04:48.913202 1054224 command_runner.go:130] > runtime_config_path = ""
	I0830 22:04:48.913219 1054224 command_runner.go:130] > monitor_path = ""
	I0830 22:04:48.913224 1054224 command_runner.go:130] > monitor_cgroup = ""
	I0830 22:04:48.913231 1054224 command_runner.go:130] > monitor_exec_cgroup = ""
	I0830 22:04:48.913247 1054224 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0830 22:04:48.913254 1054224 command_runner.go:130] > # running containers
	I0830 22:04:48.913260 1054224 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0830 22:04:48.913268 1054224 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0830 22:04:48.913278 1054224 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0830 22:04:48.913285 1054224 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0830 22:04:48.913291 1054224 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0830 22:04:48.913299 1054224 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0830 22:04:48.913305 1054224 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0830 22:04:48.913312 1054224 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0830 22:04:48.913320 1054224 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0830 22:04:48.913326 1054224 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0830 22:04:48.913334 1054224 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0830 22:04:48.913342 1054224 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0830 22:04:48.913351 1054224 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0830 22:04:48.913364 1054224 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0830 22:04:48.913373 1054224 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0830 22:04:48.913384 1054224 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0830 22:04:48.913395 1054224 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0830 22:04:48.913415 1054224 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0830 22:04:48.913425 1054224 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0830 22:04:48.913437 1054224 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0830 22:04:48.913442 1054224 command_runner.go:130] > # Example:
	I0830 22:04:48.913448 1054224 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0830 22:04:48.913456 1054224 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0830 22:04:48.913462 1054224 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0830 22:04:48.913470 1054224 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0830 22:04:48.913475 1054224 command_runner.go:130] > # cpuset = "0-1"
	I0830 22:04:48.913480 1054224 command_runner.go:130] > # cpushares = 0
	I0830 22:04:48.913484 1054224 command_runner.go:130] > # Where:
	I0830 22:04:48.913496 1054224 command_runner.go:130] > # The workload name is workload-type.
	I0830 22:04:48.913505 1054224 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0830 22:04:48.913514 1054224 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0830 22:04:48.913522 1054224 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0830 22:04:48.913532 1054224 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0830 22:04:48.913542 1054224 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0830 22:04:48.913546 1054224 command_runner.go:130] > # 
	I0830 22:04:48.913557 1054224 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0830 22:04:48.913563 1054224 command_runner.go:130] > #
	I0830 22:04:48.913570 1054224 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0830 22:04:48.913577 1054224 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0830 22:04:48.913586 1054224 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0830 22:04:48.913596 1054224 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0830 22:04:48.913603 1054224 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0830 22:04:48.913608 1054224 command_runner.go:130] > [crio.image]
	I0830 22:04:48.913617 1054224 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0830 22:04:48.913624 1054224 command_runner.go:130] > # default_transport = "docker://"
	I0830 22:04:48.913632 1054224 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0830 22:04:48.913641 1054224 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0830 22:04:48.913647 1054224 command_runner.go:130] > # global_auth_file = ""
	I0830 22:04:48.913654 1054224 command_runner.go:130] > # The image used to instantiate infra containers.
	I0830 22:04:48.913662 1054224 command_runner.go:130] > # This option supports live configuration reload.
	I0830 22:04:48.913668 1054224 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0830 22:04:48.913678 1054224 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0830 22:04:48.913743 1054224 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0830 22:04:48.913756 1054224 command_runner.go:130] > # This option supports live configuration reload.
	I0830 22:04:48.913762 1054224 command_runner.go:130] > # pause_image_auth_file = ""
	I0830 22:04:48.913769 1054224 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0830 22:04:48.913777 1054224 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0830 22:04:48.913786 1054224 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0830 22:04:48.913796 1054224 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0830 22:04:48.913803 1054224 command_runner.go:130] > # pause_command = "/pause"
	I0830 22:04:48.913811 1054224 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0830 22:04:48.913819 1054224 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0830 22:04:48.913830 1054224 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0830 22:04:48.913837 1054224 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0830 22:04:48.913846 1054224 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0830 22:04:48.913853 1054224 command_runner.go:130] > # signature_policy = ""
	I0830 22:04:48.913860 1054224 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0830 22:04:48.913870 1054224 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0830 22:04:48.913875 1054224 command_runner.go:130] > # changing them here.
	I0830 22:04:48.913881 1054224 command_runner.go:130] > # insecure_registries = [
	I0830 22:04:48.913887 1054224 command_runner.go:130] > # ]
	I0830 22:04:48.913895 1054224 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0830 22:04:48.913904 1054224 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0830 22:04:48.913909 1054224 command_runner.go:130] > # image_volumes = "mkdir"
	I0830 22:04:48.913916 1054224 command_runner.go:130] > # Temporary directory to use for storing big files
	I0830 22:04:48.913924 1054224 command_runner.go:130] > # big_files_temporary_dir = ""
	I0830 22:04:48.913934 1054224 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0830 22:04:48.913941 1054224 command_runner.go:130] > # CNI plugins.
	I0830 22:04:48.913946 1054224 command_runner.go:130] > [crio.network]
	I0830 22:04:48.913953 1054224 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0830 22:04:48.913962 1054224 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0830 22:04:48.913968 1054224 command_runner.go:130] > # cni_default_network = ""
	I0830 22:04:48.913977 1054224 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0830 22:04:48.913983 1054224 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0830 22:04:48.913991 1054224 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0830 22:04:48.913996 1054224 command_runner.go:130] > # plugin_dirs = [
	I0830 22:04:48.914001 1054224 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0830 22:04:48.914007 1054224 command_runner.go:130] > # ]
	I0830 22:04:48.914015 1054224 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0830 22:04:48.914022 1054224 command_runner.go:130] > [crio.metrics]
	I0830 22:04:48.914028 1054224 command_runner.go:130] > # Globally enable or disable metrics support.
	I0830 22:04:48.914032 1054224 command_runner.go:130] > # enable_metrics = false
	I0830 22:04:48.914040 1054224 command_runner.go:130] > # Specify enabled metrics collectors.
	I0830 22:04:48.914047 1054224 command_runner.go:130] > # By default, all metrics are enabled.
	I0830 22:04:48.914058 1054224 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0830 22:04:48.914065 1054224 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0830 22:04:48.914072 1054224 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0830 22:04:48.914077 1054224 command_runner.go:130] > # metrics_collectors = [
	I0830 22:04:48.914082 1054224 command_runner.go:130] > # 	"operations",
	I0830 22:04:48.914090 1054224 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0830 22:04:48.914096 1054224 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0830 22:04:48.914103 1054224 command_runner.go:130] > # 	"operations_errors",
	I0830 22:04:48.914108 1054224 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0830 22:04:48.914114 1054224 command_runner.go:130] > # 	"image_pulls_by_name",
	I0830 22:04:48.914120 1054224 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0830 22:04:48.914127 1054224 command_runner.go:130] > # 	"image_pulls_failures",
	I0830 22:04:48.914133 1054224 command_runner.go:130] > # 	"image_pulls_successes",
	I0830 22:04:48.914138 1054224 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0830 22:04:48.914143 1054224 command_runner.go:130] > # 	"image_layer_reuse",
	I0830 22:04:48.914170 1054224 command_runner.go:130] > # 	"containers_oom_total",
	I0830 22:04:48.914176 1054224 command_runner.go:130] > # 	"containers_oom",
	I0830 22:04:48.914181 1054224 command_runner.go:130] > # 	"processes_defunct",
	I0830 22:04:48.914188 1054224 command_runner.go:130] > # 	"operations_total",
	I0830 22:04:48.914193 1054224 command_runner.go:130] > # 	"operations_latency_seconds",
	I0830 22:04:48.914199 1054224 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0830 22:04:48.914207 1054224 command_runner.go:130] > # 	"operations_errors_total",
	I0830 22:04:48.914212 1054224 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0830 22:04:48.914218 1054224 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0830 22:04:48.914224 1054224 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0830 22:04:48.914229 1054224 command_runner.go:130] > # 	"image_pulls_success_total",
	I0830 22:04:48.914239 1054224 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0830 22:04:48.914244 1054224 command_runner.go:130] > # 	"containers_oom_count_total",
	I0830 22:04:48.914248 1054224 command_runner.go:130] > # ]
	I0830 22:04:48.914255 1054224 command_runner.go:130] > # The port on which the metrics server will listen.
	I0830 22:04:48.914264 1054224 command_runner.go:130] > # metrics_port = 9090
	I0830 22:04:48.914271 1054224 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0830 22:04:48.914278 1054224 command_runner.go:130] > # metrics_socket = ""
	I0830 22:04:48.914286 1054224 command_runner.go:130] > # The certificate for the secure metrics server.
	I0830 22:04:48.914295 1054224 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0830 22:04:48.914304 1054224 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0830 22:04:48.914309 1054224 command_runner.go:130] > # certificate on any modification event.
	I0830 22:04:48.914314 1054224 command_runner.go:130] > # metrics_cert = ""
	I0830 22:04:48.914322 1054224 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0830 22:04:48.914329 1054224 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0830 22:04:48.914334 1054224 command_runner.go:130] > # metrics_key = ""
	I0830 22:04:48.914344 1054224 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0830 22:04:48.914349 1054224 command_runner.go:130] > [crio.tracing]
	I0830 22:04:48.914355 1054224 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0830 22:04:48.914363 1054224 command_runner.go:130] > # enable_tracing = false
	I0830 22:04:48.914371 1054224 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0830 22:04:48.914379 1054224 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0830 22:04:48.914385 1054224 command_runner.go:130] > # Number of samples to collect per million spans.
	I0830 22:04:48.914391 1054224 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0830 22:04:48.914398 1054224 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0830 22:04:48.914403 1054224 command_runner.go:130] > [crio.stats]
	I0830 22:04:48.914412 1054224 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0830 22:04:48.914419 1054224 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0830 22:04:48.914427 1054224 command_runner.go:130] > # stats_collection_period = 0
	I0830 22:04:48.916334 1054224 command_runner.go:130] ! time="2023-08-30 22:04:48.905339126Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0830 22:04:48.916360 1054224 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
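Several options in the dump above (log_level, seccomp_profile, decryption_keys_path, ...) are marked as supporting live configuration reload via SIGHUP. A sketch of exercising that, assuming a hypothetical drop-in file under CRI-O's conf.d directory:

	# Hypothetical drop-in raising the log level; the file name is illustrative.
	sudo tee /etc/crio/crio.conf.d/99-log-level.conf >/dev/null <<-'EOF'
	[crio.runtime]
	log_level = "debug"
	EOF
	# log_level supports live reload, so a SIGHUP applies it without a restart.
	sudo kill -HUP "$(pidof crio)"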
	I0830 22:04:48.916718 1054224 cni.go:84] Creating CNI manager for ""
	I0830 22:04:48.916734 1054224 cni.go:136] 2 nodes found, recommending kindnet
	I0830 22:04:48.916744 1054224 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:04:48.916765 1054224 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-994875 NodeName:multinode-994875-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:04:48.916898 1054224 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-994875-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
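A rendered config like the one above can be sanity-checked by feeding it back to kubeadm before use, for example by listing the images a cluster built from it would pull (a sketch; kubeadm.yaml is an assumed local copy of the config):

	# Assumes the config above was saved locally as kubeadm.yaml.
	kubeadm config images list --config kubeadm.yaml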
	I0830 22:04:48.916979 1054224 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-994875-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-994875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:04:48.917047 1054224 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:04:48.926705 1054224 command_runner.go:130] > kubeadm
	I0830 22:04:48.926721 1054224 command_runner.go:130] > kubectl
	I0830 22:04:48.926726 1054224 command_runner.go:130] > kubelet
	I0830 22:04:48.927907 1054224 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:04:48.927986 1054224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0830 22:04:48.938891 1054224 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0830 22:04:48.960604 1054224 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
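After the unit and drop-in are copied into place, systemd still has to re-read them before the new ExecStart takes effect; the usual sequence is a daemon-reload followed by a kubelet restart (a sketch, not shown in this log excerpt):

	# Re-read unit files so kubelet.service and 10-kubeadm.conf take effect.
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet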
	I0830 22:04:48.982482 1054224 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0830 22:04:48.987024 1054224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
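The command above is an idempotent hosts-file update: drop any existing control-plane.minikube.internal line, append the fresh mapping, and replace /etc/hosts with the result. The same pattern for a generic entry (host name and IP here are illustrative):

	# Strip a stale entry, append the current one, then swap the file in.
	{ grep -v $'\texample.internal$' /etc/hosts; \
	  echo $'192.0.2.10\texample.internal'; } > "/tmp/hosts.$$"
	sudo cp "/tmp/hosts.$$" /etc/hosts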
	I0830 22:04:49.001056 1054224 host.go:66] Checking if "multinode-994875" exists ...
	I0830 22:04:49.001416 1054224 config.go:182] Loaded profile config "multinode-994875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:04:49.001416 1054224 start.go:301] JoinCluster: &{Name:multinode-994875 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-994875 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:04:49.001501 1054224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0830 22:04:49.001554 1054224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875
	I0830 22:04:49.023122 1054224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34088 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875/id_rsa Username:docker}
	I0830 22:04:49.195666 1054224 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token lu0ai6.9h8t2to9jao6tlvz --discovery-token-ca-cert-hash sha256:dbb2d1601005e0eb74ea76f1ea00d2a8cf049d471533cfdd7a067e3844af0231 
	I0830 22:04:49.195708 1054224 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0830 22:04:49.195739 1054224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lu0ai6.9h8t2to9jao6tlvz --discovery-token-ca-cert-hash sha256:dbb2d1601005e0eb74ea76f1ea00d2a8cf049d471533cfdd7a067e3844af0231 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-994875-m02"
	I0830 22:04:49.242435 1054224 command_runner.go:130] > [preflight] Running pre-flight checks
	I0830 22:04:49.288750 1054224 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0830 22:04:49.288770 1054224 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1043-aws
	I0830 22:04:49.288776 1054224 command_runner.go:130] > OS: Linux
	I0830 22:04:49.288783 1054224 command_runner.go:130] > CGROUPS_CPU: enabled
	I0830 22:04:49.288790 1054224 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0830 22:04:49.288796 1054224 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0830 22:04:49.288802 1054224 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0830 22:04:49.288807 1054224 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0830 22:04:49.288813 1054224 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0830 22:04:49.288825 1054224 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0830 22:04:49.288831 1054224 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0830 22:04:49.288837 1054224 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0830 22:04:49.398037 1054224 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0830 22:04:49.398060 1054224 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0830 22:04:49.430220 1054224 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:04:49.430496 1054224 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:04:49.430508 1054224 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0830 22:04:49.539868 1054224 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0830 22:04:52.059574 1054224 command_runner.go:130] > This node has joined the cluster:
	I0830 22:04:52.059597 1054224 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0830 22:04:52.059605 1054224 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0830 22:04:52.059616 1054224 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0830 22:04:52.063146 1054224 command_runner.go:130] ! W0830 22:04:49.241803    1025 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0830 22:04:52.063189 1054224 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1043-aws\n", err: exit status 1
	I0830 22:04:52.063202 1054224 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 22:04:52.063225 1054224 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lu0ai6.9h8t2to9jao6tlvz --discovery-token-ca-cert-hash sha256:dbb2d1601005e0eb74ea76f1ea00d2a8cf049d471533cfdd7a067e3844af0231 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-994875-m02": (2.867466625s)
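
The join above is a two-step handshake: "kubeadm token create --print-join-command --ttl=0" runs against the control plane and prints a ready-made join line (--ttl=0 makes the bootstrap token non-expiring), and that line is replayed on the new node with minikube-specific flags appended. Schematically, with the secrets elided:

	# On the control plane: mint a bootstrap token and print the join command.
	kubeadm token create --print-join-command --ttl=0
	# On the joining node: run the printed command plus minikube's extra flags.
	#   --ignore-preflight-errors=all  tolerates the container-in-container setup
	#   --cri-socket                   points kubeadm at CRI-O explicitly
	#   --node-name                    registers the node as multinode-994875-m02
	kubeadm join control-plane.minikube.internal:8443 \
	  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	  --ignore-preflight-errors=all \
	  --cri-socket /var/run/crio/crio.sock \
	  --node-name=multinode-994875-m02
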
	I0830 22:04:52.063244 1054224 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0830 22:04:52.288919 1054224 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
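
kubeadm's preflight had warned that the kubelet service was not enabled, so minikube follows up with the daemon-reload/enable/start sequence above; the symlink message confirms the enable took effect. A manual spot-check would be:

	# Both should succeed after the join completes.
	systemctl is-enabled kubelet   # expect: enabled
	systemctl is-active kubelet    # expect: active
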
	I0830 22:04:52.288944 1054224 start.go:303] JoinCluster complete in 3.287527625s
	I0830 22:04:52.288955 1054224 cni.go:84] Creating CNI manager for ""
	I0830 22:04:52.288960 1054224 cni.go:136] 2 nodes found, recommending kindnet
	I0830 22:04:52.289042 1054224 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0830 22:04:52.294547 1054224 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0830 22:04:52.294581 1054224 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0830 22:04:52.294590 1054224 command_runner.go:130] > Device: 3ah/58d	Inode: 1305245     Links: 1
	I0830 22:04:52.294597 1054224 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0830 22:04:52.294605 1054224 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0830 22:04:52.294611 1054224 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0830 22:04:52.294617 1054224 command_runner.go:130] > Change: 2023-08-30 21:37:54.859837350 +0000
	I0830 22:04:52.294623 1054224 command_runner.go:130] >  Birth: 2023-08-30 21:37:54.819837423 +0000
	I0830 22:04:52.294668 1054224 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0830 22:04:52.294681 1054224 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0830 22:04:52.317762 1054224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0830 22:04:52.632206 1054224 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0830 22:04:52.637592 1054224 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0830 22:04:52.641760 1054224 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0830 22:04:52.655102 1054224 command_runner.go:130] > daemonset.apps/kindnet configured
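
With a second node present, minikube picks kindnet as the CNI and re-applies its manifest through the cluster's own kubectl. The "unchanged" lines show the RBAC objects were already in place from the first node; only the DaemonSet came back "configured", which is what rolls the CNI pod out to m02. One way to confirm the rollout reached both nodes (illustrative):

	# kindnet runs as a DaemonSet; DESIRED and READY should equal the node count.
	kubectl get daemonsets -A | grep kindnet
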
	I0830 22:04:52.661547 1054224 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 22:04:52.661862 1054224 kapi.go:59] client config for multinode-994875: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.key", CAFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1723840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 22:04:52.662201 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0830 22:04:52.662215 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:52.662226 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:52.662234 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:52.664767 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:52.664791 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:52.664801 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:52.664808 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:52.664815 1054224 round_trippers.go:580]     Content-Length: 291
	I0830 22:04:52.664821 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:52 GMT
	I0830 22:04:52.664828 1054224 round_trippers.go:580]     Audit-Id: 5ac639c9-122a-4ef3-aac1-d0a4a7672066
	I0830 22:04:52.664835 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:52.664846 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:52.664872 1054224 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8bcc123f-4915-4961-a683-1857b6b65ea4","resourceVersion":"457","creationTimestamp":"2023-08-30T22:03:47Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0830 22:04:52.664964 1054224 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-994875" context rescaled to 1 replicas
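
The GET against the deployments/coredns/scale subresource reads the current replica count, after which minikube pins CoreDNS at one replica (the log reports "rescaled to 1 replicas" even though the count was already 1). The equivalent manual operation:

	# Scale CoreDNS via the same scale subresource kubectl uses.
	kubectl -n kube-system scale deployment coredns --replicas=1
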
	I0830 22:04:52.664992 1054224 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0830 22:04:52.668125 1054224 out.go:177] * Verifying Kubernetes components...
	I0830 22:04:52.670502 1054224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:04:52.685487 1054224 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 22:04:52.685894 1054224 kapi.go:59] client config for multinode-994875: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.crt", KeyFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/profiles/multinode-994875/client.key", CAFile:"/home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1723840), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 22:04:52.686166 1054224 node_ready.go:35] waiting up to 6m0s for node "multinode-994875-m02" to be "Ready" ...
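
The log that follows is a readiness poll: roughly every 500ms, minikube GETs the new node object and inspects its Ready condition, for up to the 6m0s budget above. The request/response blocks differ only in timestamps and Audit-Ids until the kubelet reports Ready. A shell equivalent of the check node_ready.go performs (illustrative):

	# Poll the node's Ready condition until it flips to True.
	until [ "$(kubectl get node multinode-994875-m02 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')" = "True" ]; do
	  sleep 0.5
	done
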
	I0830 22:04:52.686240 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:52.686249 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:52.686258 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:52.686271 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:52.688803 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:52.688827 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:52.688835 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:52.688842 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:52.688854 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:52.688861 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:52 GMT
	I0830 22:04:52.688872 1054224 round_trippers.go:580]     Audit-Id: e4768f2e-6f2b-4086-8676-687e7ac68d33
	I0830 22:04:52.688879 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:52.689051 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"494","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I0830 22:04:52.689494 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:52.689508 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:52.689517 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:52.689524 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:52.692028 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:52.692050 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:52.692061 1054224 round_trippers.go:580]     Audit-Id: 0d04b798-e40d-455b-a950-a38bedac1244
	I0830 22:04:52.692068 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:52.692075 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:52.692085 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:52.692092 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:52.692099 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:52 GMT
	I0830 22:04:52.692205 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"494","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I0830 22:04:53.192823 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:53.192846 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:53.192857 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:53.192865 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:53.195464 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:53.195489 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:53.195497 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:53.195504 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:53.195510 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:53.195518 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:53 GMT
	I0830 22:04:53.195524 1054224 round_trippers.go:580]     Audit-Id: e86f0806-4817-40d8-a159-bc7d7c6efb79
	I0830 22:04:53.195531 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:53.195756 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"494","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I0830 22:04:53.692793 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:53.692813 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:53.692837 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:53.692846 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:53.695642 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:53.695678 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:53.695688 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:53.695695 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:53.695702 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:53.695708 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:53.695715 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:53 GMT
	I0830 22:04:53.695727 1054224 round_trippers.go:580]     Audit-Id: 54d93df5-cb98-4f21-9435-6fb9332c60e7
	I0830 22:04:53.695879 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"494","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I0830 22:04:54.192778 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:54.192806 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:54.192817 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:54.192824 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:54.195360 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:54.195384 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:54.195395 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:54.195402 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:54 GMT
	I0830 22:04:54.195409 1054224 round_trippers.go:580]     Audit-Id: 4235d172-d185-4f30-b060-50531e9074b7
	I0830 22:04:54.195416 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:54.195422 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:54.195429 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:54.195737 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"494","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I0830 22:04:54.693528 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:54.693562 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:54.693573 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:54.693581 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:54.696664 1054224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 22:04:54.696691 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:54.696701 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:54 GMT
	I0830 22:04:54.696708 1054224 round_trippers.go:580]     Audit-Id: 16eafe93-83dd-41e9-9899-8a8fda8e0af5
	I0830 22:04:54.696715 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:54.696722 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:54.696728 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:54.696735 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:54.696831 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"494","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I0830 22:04:54.697242 1054224 node_ready.go:58] node "multinode-994875-m02" has status "Ready":"False"
	I0830 22:04:55.193604 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:55.193626 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:55.193649 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:55.193657 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:55.196311 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:55.196333 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:55.196342 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:55.196349 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:55.196356 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:55 GMT
	I0830 22:04:55.196363 1054224 round_trippers.go:580]     Audit-Id: d99955bb-64c0-4312-8d06-d6eb282abb29
	I0830 22:04:55.196369 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:55.196376 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:55.196521 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"512","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I0830 22:04:55.693661 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:55.693681 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:55.693691 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:55.693700 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:55.697193 1054224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 22:04:55.697215 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:55.697223 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:55 GMT
	I0830 22:04:55.697230 1054224 round_trippers.go:580]     Audit-Id: a17fe6a3-09ed-4ce0-a084-110e7bac3261
	I0830 22:04:55.697237 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:55.697243 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:55.697250 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:55.697256 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:55.697378 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"512","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I0830 22:04:56.193565 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:56.193590 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:56.193601 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:56.193608 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:56.196050 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:56.196075 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:56.196100 1054224 round_trippers.go:580]     Audit-Id: a75bab1c-6e00-48a3-b3d7-d06de5160178
	I0830 22:04:56.196117 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:56.196125 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:56.196134 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:56.196143 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:56.196151 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:56 GMT
	I0830 22:04:56.196532 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"512","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I0830 22:04:56.693003 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:56.693032 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:56.693042 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:56.693050 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:56.695580 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:56.695601 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:56.695615 1054224 round_trippers.go:580]     Audit-Id: f0b846d6-b755-4569-8b80-235db3b4411d
	I0830 22:04:56.695623 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:56.695630 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:56.695636 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:56.695645 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:56.695651 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:56 GMT
	I0830 22:04:56.695806 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"512","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I0830 22:04:57.192780 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:57.192805 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:57.192815 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:57.192822 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:57.195495 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:57.195516 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:57.195525 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:57.195532 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:57.195539 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:57 GMT
	I0830 22:04:57.195545 1054224 round_trippers.go:580]     Audit-Id: 41d8cf19-736b-4142-93c3-d7b74fde04a1
	I0830 22:04:57.195552 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:57.195558 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:57.195717 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"512","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I0830 22:04:57.196084 1054224 node_ready.go:58] node "multinode-994875-m02" has status "Ready":"False"
	I0830 22:04:57.692793 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:57.692820 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:57.692831 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:57.692838 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:57.695746 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:57.695775 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:57.695785 1054224 round_trippers.go:580]     Audit-Id: 34b787ff-09eb-4011-a0f2-04ccb24f5859
	I0830 22:04:57.695792 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:57.695798 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:57.695805 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:57.695812 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:57.695818 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:57 GMT
	I0830 22:04:57.696277 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"512","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I0830 22:04:58.192835 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:58.192858 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:58.192869 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:58.192877 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:58.195425 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:58.195450 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:58.195459 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:58.195466 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:58.195473 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:58.195480 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:58 GMT
	I0830 22:04:58.195486 1054224 round_trippers.go:580]     Audit-Id: e5a5cfb8-d0b5-4571-9d03-7bc41214d55a
	I0830 22:04:58.195492 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:58.195704 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"512","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I0830 22:04:58.693469 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:58.693497 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:58.693521 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:58.693529 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:58.696187 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:58.696211 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:58.696220 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:58.696227 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:58.696235 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:58 GMT
	I0830 22:04:58.696243 1054224 round_trippers.go:580]     Audit-Id: e9239dcf-4373-4e23-8466-0fa30089e0e3
	I0830 22:04:58.696253 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:58.696260 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:58.696385 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"512","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I0830 22:04:59.193323 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:59.193350 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:59.193360 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:59.193368 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:59.196022 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:59.196049 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:59.196059 1054224 round_trippers.go:580]     Audit-Id: 1ce805c6-987a-4be5-a44c-6ec165cfacae
	I0830 22:04:59.196066 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:59.196072 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:59.196079 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:59.196089 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:59.196100 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:59 GMT
	I0830 22:04:59.196442 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"512","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I0830 22:04:59.196813 1054224 node_ready.go:58] node "multinode-994875-m02" has status "Ready":"False"
	I0830 22:04:59.693614 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:04:59.693637 1054224 round_trippers.go:469] Request Headers:
	I0830 22:04:59.693648 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:04:59.693655 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:04:59.695990 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:04:59.696016 1054224 round_trippers.go:577] Response Headers:
	I0830 22:04:59.696026 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:04:59.696033 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:04:59.696042 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:04:59 GMT
	I0830 22:04:59.696048 1054224 round_trippers.go:580]     Audit-Id: 5f459bb8-6fe7-4fae-9bc0-29f6333657a8
	I0830 22:04:59.696055 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:04:59.696066 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:04:59.696235 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"512","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I0830 22:05:00.193521 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:00.193552 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:00.193563 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:00.193571 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:00.197645 1054224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0830 22:05:00.197671 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:00.197681 1054224 round_trippers.go:580]     Audit-Id: 436e36e2-d9f8-4279-9d4b-4b228e65de23
	I0830 22:05:00.197690 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:00.197698 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:00.197705 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:00.197712 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:00.197719 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:00 GMT
	I0830 22:05:00.197847 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"512","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I0830 22:05:00.692863 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:00.692887 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:00.692897 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:00.692905 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:00.695680 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:00.695706 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:00.695715 1054224 round_trippers.go:580]     Audit-Id: 13dc6c15-0356-42b3-a784-5368ad21beb8
	I0830 22:05:00.695722 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:00.695730 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:00.695736 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:00.695744 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:00.695753 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:00 GMT
	I0830 22:05:00.695850 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"512","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I0830 22:05:01.192789 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:01.192813 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:01.192823 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:01.192830 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:01.195528 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:01.195549 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:01.195558 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:01.195566 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:01 GMT
	I0830 22:05:01.195573 1054224 round_trippers.go:580]     Audit-Id: db0082b8-a299-4f2f-b676-2956e3730587
	I0830 22:05:01.195580 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:01.195587 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:01.195594 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:01.196206 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"512","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I0830 22:05:01.693281 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:01.693305 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:01.693316 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:01.693323 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:01.695805 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:01.695827 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:01.695836 1054224 round_trippers.go:580]     Audit-Id: a62862cc-2e7c-4787-94f9-27c3eff52f38
	I0830 22:05:01.695842 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:01.695849 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:01.695855 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:01.695862 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:01.695870 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:01 GMT
	I0830 22:05:01.695975 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"512","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I0830 22:05:01.696333 1054224 node_ready.go:58] node "multinode-994875-m02" has status "Ready":"False"
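
The loop above is minikube's node readiness wait: it re-fetches the Node object roughly every 500 ms (compare the timestamps on successive GETs) and checks whether the Ready condition has turned True yet. A minimal sketch of such a poll with client-go follows; waitForNodeReady is an illustrative name, not minikube's actual helper, and the interval is simply read off the cadence above.

    // nodewait.go: a minimal sketch of the readiness poll seen in this log,
    // assuming a configured kubernetes.Interface (illustrative, not
    // minikube's actual implementation).
    package nodewait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForNodeReady polls the Node every 500ms (the cadence of the GETs
    // above) until its Ready condition is True or the timeout elapses.
    func waitForNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat errors as transient and keep polling
    		}
    		for _, cond := range node.Status.Conditions {
    			if cond.Type == corev1.NodeReady {
    				return cond.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }
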
	I0830 22:05:02.193262 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:02.193289 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:02.193300 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:02.193308 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:02.195902 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:02.195923 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:02.195933 1054224 round_trippers.go:580]     Audit-Id: d1de5876-a659-4fad-88db-6e1c9002a2ef
	I0830 22:05:02.195940 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:02.195947 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:02.195955 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:02.195962 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:02.195968 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:02 GMT
	I0830 22:05:02.196102 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"512","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I0830 22:05:02.693100 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:02.693120 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:02.693152 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:02.693160 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:02.695684 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:02.695709 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:02.695718 1054224 round_trippers.go:580]     Audit-Id: 46aa943d-a7ce-4071-94d2-3ef108449211
	I0830 22:05:02.695725 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:02.695732 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:02.695738 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:02.695745 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:02.695755 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:02 GMT
	I0830 22:05:02.696042 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:03.192919 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:03.192946 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:03.192956 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:03.192964 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:03.195875 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:03.195902 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:03.195915 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:03.195928 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:03.195939 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:03.195946 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:03 GMT
	I0830 22:05:03.195952 1054224 round_trippers.go:580]     Audit-Id: 4f8807a6-ec51-4382-b9b3-5a6473fcee50
	I0830 22:05:03.195959 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:03.196126 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:03.693409 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:03.693432 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:03.693443 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:03.693450 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:03.696119 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:03.696160 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:03.696169 1054224 round_trippers.go:580]     Audit-Id: ead79a3c-35ef-4e6e-b617-ac996de6612a
	I0830 22:05:03.696177 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:03.696183 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:03.696190 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:03.696197 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:03.696203 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:03 GMT
	I0830 22:05:03.696393 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:03.696785 1054224 node_ready.go:58] node "multinode-994875-m02" has status "Ready":"False"
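
The round_trippers.go lines themselves come from client-go's debugging transport, which wraps the real HTTP transport and, at high log verbosity, dumps the URL, request headers, response status with timing, response headers, and a truncated body, exactly the shape of each block in this log. A rough sketch of such a wrapper is below; logTransport is an illustrative name, not client-go's actual type.

    // debugrt.go: sketch of a header-logging http.RoundTripper in the
    // spirit of client-go's round_trippers.go output format.
    package debugrt

    import (
    	"log"
    	"net/http"
    	"time"
    )

    type logTransport struct{ next http.RoundTripper }

    func (t logTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    	start := time.Now()
    	log.Printf("%s %s", req.Method, req.URL)
    	log.Printf("Request Headers:")
    	for k, v := range req.Header {
    		log.Printf("    %s: %v", k, v)
    	}
    	resp, err := t.next.RoundTrip(req)
    	if err != nil {
    		return nil, err
    	}
    	log.Printf("Response Status: %s in %d milliseconds",
    		resp.Status, time.Since(start).Milliseconds())
    	log.Printf("Response Headers:")
    	for k, v := range resp.Header {
    		log.Printf("    %s: %v", k, v)
    	}
    	return resp, nil
    }

To use it, wrap the default transport: &http.Client{Transport: logTransport{next: http.DefaultTransport}}.
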
	I0830 22:05:04.193026 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:04.193052 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:04.193061 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:04.193069 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:04.195596 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:04.195621 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:04.195630 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:04.195637 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:04.195646 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:04 GMT
	I0830 22:05:04.195654 1054224 round_trippers.go:580]     Audit-Id: d6a01e9f-c9a5-411b-81a8-3d3c23a0ce9b
	I0830 22:05:04.195661 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:04.195667 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:04.195836 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:04.693434 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:04.693483 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:04.693494 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:04.693501 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:04.696049 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:04.696073 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:04.696083 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:04.696091 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:04.696097 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:04.696104 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:04.696111 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:04 GMT
	I0830 22:05:04.696118 1054224 round_trippers.go:580]     Audit-Id: 5c6de266-969f-46b5-ae82-cf5bafe937c8
	I0830 22:05:04.696478 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:05.192809 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:05.192832 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:05.192843 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:05.192850 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:05.196000 1054224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 22:05:05.196030 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:05.196039 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:05.196047 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:05.196054 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:05 GMT
	I0830 22:05:05.196061 1054224 round_trippers.go:580]     Audit-Id: f7b302ae-3268-4dba-b535-7534b00680e7
	I0830 22:05:05.196069 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:05.196075 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:05.196241 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:05.693045 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:05.693067 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:05.693077 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:05.693084 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:05.695569 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:05.695593 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:05.695602 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:05.695609 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:05.695616 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:05 GMT
	I0830 22:05:05.695623 1054224 round_trippers.go:580]     Audit-Id: 8b965ad6-efe8-4213-8f2b-68709e721870
	I0830 22:05:05.695630 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:05.695640 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:05.695736 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:06.193727 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:06.193751 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:06.193762 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:06.193770 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:06.196359 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:06.196382 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:06.196391 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:06.196401 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:06 GMT
	I0830 22:05:06.196408 1054224 round_trippers.go:580]     Audit-Id: 167623de-b1c9-4609-b091-456be750dce9
	I0830 22:05:06.196414 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:06.196421 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:06.196427 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:06.196563 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:06.196930 1054224 node_ready.go:58] node "multinode-994875-m02" has status "Ready":"False"
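
The X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid response headers are stamped by the apiserver's API Priority and Fairness machinery: they carry the UIDs of the FlowSchema and PriorityLevelConfiguration that classified each request. They are constant across every poll in this log (0aec9b58… / 6199e093…), so all of these GETs are being handled under the same priority level. A small sketch for reading them off a raw response; client construction and auth are assumed elsewhere.

    // apf.go: read the APF classification headers from a raw apiserver
    // response (sketch; the caller obtains resp however it likes).
    package apf

    import "net/http"

    // apfClassification returns the FlowSchema and PriorityLevelConfiguration
    // UIDs the apiserver stamped on resp.
    func apfClassification(resp *http.Response) (flowSchemaUID, priorityLevelUID string) {
    	return resp.Header.Get("X-Kubernetes-Pf-Flowschema-Uid"),
    		resp.Header.Get("X-Kubernetes-Pf-Prioritylevel-Uid")
    }
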
	I0830 22:05:06.693692 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:06.693715 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:06.693725 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:06.693733 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:06.696181 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:06.696211 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:06.696221 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:06.696228 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:06.696235 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:06.696242 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:06 GMT
	I0830 22:05:06.696249 1054224 round_trippers.go:580]     Audit-Id: 5332154c-68fc-47e4-b844-4702e6d76aa6
	I0830 22:05:06.696258 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:06.696365 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:07.192777 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:07.192802 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:07.192833 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:07.192842 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:07.195510 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:07.195533 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:07.195542 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:07.195549 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:07.195556 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:07.195570 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:07 GMT
	I0830 22:05:07.195590 1054224 round_trippers.go:580]     Audit-Id: 5151a0b6-d526-4883-86ab-9b5fb7e97e26
	I0830 22:05:07.195597 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:07.195726 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:07.693315 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:07.693340 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:07.693351 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:07.693359 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:07.695910 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:07.695935 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:07.695946 1054224 round_trippers.go:580]     Audit-Id: 44fcde22-e2dc-4eda-b8c7-3adbcec13601
	I0830 22:05:07.695953 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:07.695960 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:07.695967 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:07.695974 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:07.695980 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:07 GMT
	I0830 22:05:07.696085 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:08.192822 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:08.192854 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:08.192865 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:08.192872 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:08.195487 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:08.195510 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:08.195519 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:08 GMT
	I0830 22:05:08.195526 1054224 round_trippers.go:580]     Audit-Id: e9675b68-97ae-4bd4-b4a4-6f0c887feb39
	I0830 22:05:08.195533 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:08.195539 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:08.195545 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:08.195552 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:08.195670 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:08.692760 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:08.692788 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:08.692798 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:08.692806 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:08.695348 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:08.695381 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:08.695390 1054224 round_trippers.go:580]     Audit-Id: e98d84b8-613a-4a8b-ad94-09adb5a91220
	I0830 22:05:08.695404 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:08.695411 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:08.695423 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:08.695434 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:08.695454 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:08 GMT
	I0830 22:05:08.695561 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:08.695994 1054224 node_ready.go:58] node "multinode-994875-m02" has status "Ready":"False"
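
Note that the object's resourceVersion advanced from 512 to 520 at the 22:05:02 poll while Ready stayed False, so the node is being updated between polls. A watch would deliver exactly those updates without re-fetching the full ~5.5 KB object twice a second; a sketch of the same wait expressed as a watch, under the same clientset assumption as before:

    // watchready.go: the readiness wait expressed as a watch instead of a
    // poll (sketch; assumes a configured kubernetes.Interface).
    package watchready

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func watchNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	w, err := cs.CoreV1().Nodes().Watch(ctx, metav1.ListOptions{
    		FieldSelector: "metadata.name=" + name,
    	})
    	if err != nil {
    		return err
    	}
    	defer w.Stop()
    	for ev := range w.ResultChan() {
    		node, ok := ev.Object.(*corev1.Node)
    		if !ok {
    			continue
    		}
    		for _, cond := range node.Status.Conditions {
    			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
    				return nil
    			}
    		}
    	}
    	return fmt.Errorf("watch for node %q closed before Ready", name)
    }
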
	I0830 22:05:09.193771 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:09.193793 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:09.193804 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:09.193812 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:09.196348 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:09.196374 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:09.196383 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:09.196392 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:09.196399 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:09.196406 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:09 GMT
	I0830 22:05:09.196413 1054224 round_trippers.go:580]     Audit-Id: 1e9455f0-29ee-49ea-98f0-e5c0bfe8cda9
	I0830 22:05:09.196420 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:09.196564 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:09.693674 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:09.693696 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:09.693706 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:09.693713 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:09.696181 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:09.696207 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:09.696216 1054224 round_trippers.go:580]     Audit-Id: a0bbdfef-f579-40ed-a807-c2eb6df44390
	I0830 22:05:09.696224 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:09.696231 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:09.696238 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:09.696245 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:09.696255 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:09 GMT
	I0830 22:05:09.696576 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:10.193323 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:10.193347 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:10.193357 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:10.193380 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:10.195870 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:10.195898 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:10.195907 1054224 round_trippers.go:580]     Audit-Id: e5d078ae-9a6d-4a3d-a644-18202f851bd8
	I0830 22:05:10.195915 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:10.195922 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:10.195928 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:10.195936 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:10.195946 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:10 GMT
	I0830 22:05:10.196092 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:10.693515 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:10.693541 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:10.693551 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:10.693559 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:10.696064 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:10.696087 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:10.696096 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:10.696103 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:10.696110 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:10.696119 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:10 GMT
	I0830 22:05:10.696125 1054224 round_trippers.go:580]     Audit-Id: 68c9b99a-323a-44e3-ae2d-1b0a9232669c
	I0830 22:05:10.696132 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:10.696249 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:10.696623 1054224 node_ready.go:58] node "multinode-994875-m02" has status "Ready":"False"
	I0830 22:05:11.193408 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:11.193429 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:11.193439 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:11.193454 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:11.196073 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:11.196098 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:11.196107 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:11.196114 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:11 GMT
	I0830 22:05:11.196121 1054224 round_trippers.go:580]     Audit-Id: 533890f1-69ab-4aeb-8d1b-b1591e89cf62
	I0830 22:05:11.196131 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:11.196137 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:11.196148 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:11.196283 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:11.693370 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:11.693392 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:11.693402 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:11.693410 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:11.695887 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:11.695913 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:11.695922 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:11.695929 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:11.695936 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:11.695943 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:11.695951 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:11 GMT
	I0830 22:05:11.695963 1054224 round_trippers.go:580]     Audit-Id: 839679f9-bcff-4a8f-9910-c45937400f19
	I0830 22:05:11.696085 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:12.193287 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:12.193311 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:12.193323 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:12.193330 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:12.195889 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:12.195918 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:12.195929 1054224 round_trippers.go:580]     Audit-Id: 2bba6509-c5aa-4541-bc08-908209944e69
	I0830 22:05:12.195936 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:12.195942 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:12.195949 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:12.195956 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:12.195963 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:12 GMT
	I0830 22:05:12.196111 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:12.692769 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:12.692793 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:12.692803 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:12.692811 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:12.695372 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:12.695393 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:12.695402 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:12 GMT
	I0830 22:05:12.695409 1054224 round_trippers.go:580]     Audit-Id: a1c12b82-bb27-446b-b3e3-44c6ab71e1f0
	I0830 22:05:12.695416 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:12.695422 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:12.695429 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:12.695435 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:12.695550 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:13.193707 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:13.193729 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:13.193738 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:13.193746 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:13.196323 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:13.196344 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:13.196353 1054224 round_trippers.go:580]     Audit-Id: b0774fbb-5f41-4c86-972f-32641472d1d8
	I0830 22:05:13.196360 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:13.196368 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:13.196374 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:13.196381 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:13.196388 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:13 GMT
	I0830 22:05:13.196495 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:13.196868 1054224 node_ready.go:58] node "multinode-994875-m02" has status "Ready":"False"
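
Most of each ~5.5 KB response body is the managedFields stanza, the per-manager field-ownership ledger kept for server-side apply (both kubelet and kubeadm appear as managers above). It carries no readiness information, so a client that only inspects status can clear it before logging, as in this sketch:

    // slim.go: drop managedFields from a fetched Node before logging it
    // (sketch; node is a *corev1.Node obtained elsewhere).
    package slim

    import corev1 "k8s.io/api/core/v1"

    // stripManagedFields removes the server-side-apply ownership records,
    // which dominate the response bodies in this log.
    func stripManagedFields(node *corev1.Node) {
    	node.ManagedFields = nil
    }
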
	I0830 22:05:13.693719 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:13.693745 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:13.693756 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:13.693764 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:13.696303 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:13.696337 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:13.696346 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:13.696353 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:13.696360 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:13 GMT
	I0830 22:05:13.696367 1054224 round_trippers.go:580]     Audit-Id: 4e445abc-26ee-485d-957e-c5a0ceb6c90f
	I0830 22:05:13.696374 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:13.696382 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:13.696479 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:14.193633 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:14.193657 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:14.193667 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:14.193674 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:14.196269 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:14.196293 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:14.196302 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:14 GMT
	I0830 22:05:14.196309 1054224 round_trippers.go:580]     Audit-Id: 6c0ea472-6f1d-4fef-9f2d-10f7729a50cf
	I0830 22:05:14.196316 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:14.196322 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:14.196329 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:14.196336 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:14.196758 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:14.692887 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:14.692911 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:14.692920 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:14.692928 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:14.695357 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:14.695380 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:14.695389 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:14.695396 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:14 GMT
	I0830 22:05:14.695402 1054224 round_trippers.go:580]     Audit-Id: ef35aa28-929f-4bbe-9ad1-0fec0036b3ba
	I0830 22:05:14.695409 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:14.695416 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:14.695423 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:14.695532 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:15.193668 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:15.193698 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:15.193708 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:15.193717 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:15.196390 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:15.196419 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:15.196428 1054224 round_trippers.go:580]     Audit-Id: 1ac37a16-9398-4533-a911-fd12316df07b
	I0830 22:05:15.196435 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:15.196441 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:15.196448 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:15.196456 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:15.196469 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:15 GMT
	I0830 22:05:15.196601 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:15.196992 1054224 node_ready.go:58] node "multinode-994875-m02" has status "Ready":"False"
	I0830 22:05:15.693730 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:15.693757 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:15.693767 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:15.693774 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:15.696267 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:15.696287 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:15.696296 1054224 round_trippers.go:580]     Audit-Id: eccc056d-9392-4f8c-af00-2b5fec4ada3b
	I0830 22:05:15.696303 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:15.696309 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:15.696315 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:15.696322 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:15.696330 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:15 GMT
	I0830 22:05:15.696495 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:16.193578 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:16.193605 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:16.193616 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:16.193624 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:16.196168 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:16.196191 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:16.196200 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:16.196206 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:16.196213 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:16.196220 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:16 GMT
	I0830 22:05:16.196227 1054224 round_trippers.go:580]     Audit-Id: 53db9ed1-0146-44fa-b431-8fdf9363e471
	I0830 22:05:16.196234 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:16.196368 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:16.693534 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:16.693557 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:16.693569 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:16.693577 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:16.696149 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:16.696173 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:16.696182 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:16 GMT
	I0830 22:05:16.696189 1054224 round_trippers.go:580]     Audit-Id: 4c6a44ef-2e35-4e22-b09e-35d4e19b7283
	I0830 22:05:16.696196 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:16.696203 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:16.696209 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:16.696216 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:16.696322 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:17.193706 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:17.193732 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:17.193742 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:17.193749 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:17.196185 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:17.196212 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:17.196221 1054224 round_trippers.go:580]     Audit-Id: 657d7c3f-0157-4c0a-b69b-4e1ac520bd6b
	I0830 22:05:17.196229 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:17.196235 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:17.196242 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:17.196249 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:17.196260 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:17 GMT
	I0830 22:05:17.196408 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:17.693531 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:17.693554 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:17.693564 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:17.693572 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:17.696085 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:17.696105 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:17.696114 1054224 round_trippers.go:580]     Audit-Id: 78095309-76ff-4261-b68f-07334ff2e3ea
	I0830 22:05:17.696121 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:17.696127 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:17.696134 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:17.696141 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:17.696148 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:17 GMT
	I0830 22:05:17.696287 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:17.696662 1054224 node_ready.go:58] node "multinode-994875-m02" has status "Ready":"False"
	I0830 22:05:18.193505 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:18.193527 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:18.193537 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:18.193545 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:18.196012 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:18.196040 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:18.196049 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:18 GMT
	I0830 22:05:18.196056 1054224 round_trippers.go:580]     Audit-Id: 980fa50d-1300-4c95-bfab-a6c035506e4d
	I0830 22:05:18.196063 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:18.196070 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:18.196083 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:18.196090 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:18.196206 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:18.693276 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:18.693297 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:18.693308 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:18.693316 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:18.696119 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:18.696158 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:18.696168 1054224 round_trippers.go:580]     Audit-Id: 4b22f3b1-75f1-4f10-bc99-edf5e32db2d2
	I0830 22:05:18.696175 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:18.696182 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:18.696189 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:18.696196 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:18.696203 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:18 GMT
	I0830 22:05:18.696297 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:19.192732 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:19.192762 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:19.192772 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:19.192780 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:19.195569 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:19.195590 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:19.195598 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:19.195605 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:19.195612 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:19 GMT
	I0830 22:05:19.195619 1054224 round_trippers.go:580]     Audit-Id: 900665ac-feda-492f-a7bf-89e4f8fb54e5
	I0830 22:05:19.195626 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:19.195632 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:19.195745 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:19.693430 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:19.693454 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:19.693464 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:19.693471 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:19.696138 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:19.696167 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:19.696176 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:19.696183 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:19.696192 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:19 GMT
	I0830 22:05:19.696199 1054224 round_trippers.go:580]     Audit-Id: 3915527e-f299-4e60-ab46-4750f5e76980
	I0830 22:05:19.696207 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:19.696214 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:19.696327 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:19.696713 1054224 node_ready.go:58] node "multinode-994875-m02" has status "Ready":"False"
	I0830 22:05:20.193480 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:20.193507 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:20.193518 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:20.193526 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:20.196352 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:20.196379 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:20.196389 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:20.196395 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:20.196402 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:20.196410 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:20 GMT
	I0830 22:05:20.196417 1054224 round_trippers.go:580]     Audit-Id: 2c106568-ba6e-47e2-a0a7-25a0a751a523
	I0830 22:05:20.196425 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:20.196547 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:20.693752 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:20.693777 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:20.693786 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:20.693794 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:20.696212 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:20.696236 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:20.696246 1054224 round_trippers.go:580]     Audit-Id: ac399444-a78c-4189-9b30-e16b71d04131
	I0830 22:05:20.696253 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:20.696260 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:20.696267 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:20.696274 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:20.696284 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:20 GMT
	I0830 22:05:20.696426 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:21.193570 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:21.193595 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:21.193606 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:21.193614 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:21.196090 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:21.196116 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:21.196126 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:21 GMT
	I0830 22:05:21.196133 1054224 round_trippers.go:580]     Audit-Id: 412b1bf8-3e7e-43a3-b5b0-290e5098c412
	I0830 22:05:21.196139 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:21.196146 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:21.196153 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:21.196166 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:21.196416 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:21.693271 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:21.693290 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:21.693300 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:21.693307 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:21.696080 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:21.696104 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:21.696113 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:21.696120 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:21 GMT
	I0830 22:05:21.696127 1054224 round_trippers.go:580]     Audit-Id: 5162c428-ed12-45d9-a19b-1d5cea5c6b90
	I0830 22:05:21.696133 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:21.696140 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:21.696146 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:21.696452 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:21.696841 1054224 node_ready.go:58] node "multinode-994875-m02" has status "Ready":"False"
	I0830 22:05:22.193231 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:22.193256 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:22.193268 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:22.193275 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:22.195843 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:22.195871 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:22.195880 1054224 round_trippers.go:580]     Audit-Id: 970e500c-4ca2-4768-bafd-e3e97b70a944
	I0830 22:05:22.195887 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:22.195894 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:22.195900 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:22.195907 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:22.195915 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:22 GMT
	I0830 22:05:22.196051 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:22.693158 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:22.693182 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:22.693192 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:22.693200 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:22.695555 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:22.695578 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:22.695587 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:22.695594 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:22.695601 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:22.695608 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:22 GMT
	I0830 22:05:22.695620 1054224 round_trippers.go:580]     Audit-Id: c78f0aaf-28a6-4a65-bc9c-efe7ae65809c
	I0830 22:05:22.695629 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:22.695955 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:23.192756 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:23.192778 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:23.192788 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:23.192796 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:23.195346 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:23.195368 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:23.195376 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:23.195383 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:23.195390 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:23.195396 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:23.195403 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:23 GMT
	I0830 22:05:23.195410 1054224 round_trippers.go:580]     Audit-Id: e8b1cf21-70be-439c-9d87-b2ae5b5ec363
	I0830 22:05:23.195608 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"520","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I0830 22:05:23.693352 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:23.693377 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:23.693388 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:23.693396 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:23.696142 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:23.696172 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:23.696181 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:23.696190 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:23 GMT
	I0830 22:05:23.696197 1054224 round_trippers.go:580]     Audit-Id: d13e3a2b-6d04-4410-8985-000ab21d2d15
	I0830 22:05:23.696204 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:23.696210 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:23.696218 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:23.696341 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"542","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5378 chars]
	I0830 22:05:23.696710 1054224 node_ready.go:49] node "multinode-994875-m02" has status "Ready":"True"
	I0830 22:05:23.696727 1054224 node_ready.go:38] duration metric: took 31.010545331s waiting for node "multinode-994875-m02" to be "Ready" ...
	I0830 22:05:23.696736 1054224 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
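At this point the node wait has completed (31.01s) and the log switches to the system-pods phase: a single GET of all kube-system pods, after which each pod matching the listed labels is waited on individually. Below is a short sketch, under the same stdlib-only assumptions as the earlier one, of reducing such a PodList to a per-pod Ready report; the field names mirror the core/v1 JSON visible in the log, and reading the list from stdin is an assumption made so the sketch stays self-contained.

// Minimal sketch of the pod-readiness scan: decode a core/v1 PodList (as
// returned by GET /api/v1/namespaces/kube-system/pods) and report whether
// each pod's "Ready" condition is "True".
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name   string            `json:"name"`
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// Assumes the PodList JSON is piped in on stdin.
	var pl podList
	if err := json.NewDecoder(os.Stdin).Decode(&pl); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, p := range pl.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		fmt.Printf("%-40s ready=%v\n", p.Metadata.Name, ready)
	}
}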
	I0830 22:05:23.696800 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0830 22:05:23.696810 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:23.696818 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:23.696826 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:23.700490 1054224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 22:05:23.700523 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:23.700534 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:23.700541 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:23.700549 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:23 GMT
	I0830 22:05:23.700556 1054224 round_trippers.go:580]     Audit-Id: 5070a651-fba3-4180-8184-2c02d08304b4
	I0830 22:05:23.700563 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:23.700570 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:23.702086 1054224 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"542"},"items":[{"metadata":{"name":"coredns-5dd5756b68-24ps6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117","resourceVersion":"453","creationTimestamp":"2023-08-30T22:04:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8f852740-67c6-4703-9481-742a0860e84e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f852740-67c6-4703-9481-742a0860e84e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68974 chars]
	I0830 22:05:23.705097 1054224 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-24ps6" in "kube-system" namespace to be "Ready" ...
	I0830 22:05:23.705247 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-24ps6
	I0830 22:05:23.705256 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:23.705266 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:23.705273 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:23.707816 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:23.707839 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:23.707850 1054224 round_trippers.go:580]     Audit-Id: e7834244-3526-4297-b358-3a70c33c1ba3
	I0830 22:05:23.707858 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:23.707865 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:23.707873 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:23.707883 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:23.707890 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:23 GMT
	I0830 22:05:23.708311 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-24ps6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117","resourceVersion":"453","creationTimestamp":"2023-08-30T22:04:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"8f852740-67c6-4703-9481-742a0860e84e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8f852740-67c6-4703-9481-742a0860e84e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0830 22:05:23.708875 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:05:23.708885 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:23.708893 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:23.708900 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:23.711373 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:23.711393 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:23.711407 1054224 round_trippers.go:580]     Audit-Id: ea85a030-03b5-4e85-9a02-5af2dce6bcf9
	I0830 22:05:23.711416 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:23.711422 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:23.711429 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:23.711436 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:23.711443 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:23 GMT
	I0830 22:05:23.711824 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:05:23.712236 1054224 pod_ready.go:92] pod "coredns-5dd5756b68-24ps6" in "kube-system" namespace has status "Ready":"True"
	I0830 22:05:23.712254 1054224 pod_ready.go:81] duration metric: took 7.127205ms waiting for pod "coredns-5dd5756b68-24ps6" in "kube-system" namespace to be "Ready" ...
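The coredns check above illustrates the per-pod pattern that repeats for each system pod below: one GET for the pod, then one GET for the node it is scheduled on, with the Ready verdict logged only after both responses arrive — which suggests the readiness decision also consults the hosting node, though the log alone does not prove that. A minimal Go sketch of that paired check follows; since Node and Pod expose status.conditions with the same shape, one helper can serve both. The apiserver URL and the insecure TLS client are the same illustrative assumptions as in the first sketch.

// Minimal sketch of the paired pod/node check: one generic helper reads the
// "Ready" condition off either object's status.conditions.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// withConditions matches the common shape of Node and Pod status.
type withConditions struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func readyCondition(client *http.Client, url string) (bool, error) {
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var obj withConditions
	if err := json.NewDecoder(resp.Body).Decode(&obj); err != nil {
		return false, err
	}
	for _, c := range obj.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	api := "https://192.168.58.2:8443" // endpoint taken from the log
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}
	// Errors ignored for brevity in this sketch.
	podOK, _ := readyCondition(client, api+"/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-24ps6")
	nodeOK, _ := readyCondition(client, api+"/api/v1/nodes/multinode-994875")
	fmt.Printf("pod ready=%v, node ready=%v\n", podOK, nodeOK)
}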
	I0830 22:05:23.712268 1054224 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:05:23.712327 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-994875
	I0830 22:05:23.712336 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:23.712345 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:23.712354 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:23.714788 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:23.714825 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:23.714833 1054224 round_trippers.go:580]     Audit-Id: 9b1f838f-5b55-4bd4-b4b7-629e9a379ff0
	I0830 22:05:23.714840 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:23.714860 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:23.714886 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:23.714898 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:23.714905 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:23 GMT
	I0830 22:05:23.715016 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-994875","namespace":"kube-system","uid":"3a724c5d-4cbe-4740-a64d-883f1859d257","resourceVersion":"427","creationTimestamp":"2023-08-30T22:03:45Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"6839f74d8b9c4802e53664200913f5de","kubernetes.io/config.mirror":"6839f74d8b9c4802e53664200913f5de","kubernetes.io/config.seen":"2023-08-30T22:03:39.081064328Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0830 22:05:23.715495 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:05:23.715511 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:23.715520 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:23.715542 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:23.717852 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:23.717874 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:23.717882 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:23.717889 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:23 GMT
	I0830 22:05:23.717897 1054224 round_trippers.go:580]     Audit-Id: 0289ab1e-a17b-4fdd-aaa7-bb3d775c8eed
	I0830 22:05:23.717904 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:23.717914 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:23.717926 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:23.718222 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:05:23.718638 1054224 pod_ready.go:92] pod "etcd-multinode-994875" in "kube-system" namespace has status "Ready":"True"
	I0830 22:05:23.718654 1054224 pod_ready.go:81] duration metric: took 6.379439ms waiting for pod "etcd-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:05:23.718670 1054224 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:05:23.718733 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-994875
	I0830 22:05:23.718743 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:23.718752 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:23.718759 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:23.721123 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:23.721192 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:23.721230 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:23.721253 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:23.721272 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:23 GMT
	I0830 22:05:23.721291 1054224 round_trippers.go:580]     Audit-Id: 20513349-ef6c-4f16-bc27-481795a23cd7
	I0830 22:05:23.721333 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:23.721347 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:23.721484 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-994875","namespace":"kube-system","uid":"1a58b1f6-9e2b-438e-b54b-d23f7804e728","resourceVersion":"424","creationTimestamp":"2023-08-30T22:03:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"ffe9011f46196e6529dac903c5aa8d04","kubernetes.io/config.mirror":"ffe9011f46196e6529dac903c5aa8d04","kubernetes.io/config.seen":"2023-08-30T22:03:47.448536926Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0830 22:05:23.722031 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:05:23.722045 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:23.722055 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:23.722062 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:23.724324 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:23.724347 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:23.724357 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:23.724364 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:23.724375 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:23.724382 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:23 GMT
	I0830 22:05:23.724399 1054224 round_trippers.go:580]     Audit-Id: 1d525915-f92a-448d-a85f-e21452ca9bcd
	I0830 22:05:23.724408 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:23.724505 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:05:23.724887 1054224 pod_ready.go:92] pod "kube-apiserver-multinode-994875" in "kube-system" namespace has status "Ready":"True"
	I0830 22:05:23.724908 1054224 pod_ready.go:81] duration metric: took 6.225077ms waiting for pod "kube-apiserver-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:05:23.724919 1054224 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:05:23.724980 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-994875
	I0830 22:05:23.724989 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:23.724997 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:23.725005 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:23.727392 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:23.727423 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:23.727432 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:23.727439 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:23.727446 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:23.727453 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:23.727463 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:23 GMT
	I0830 22:05:23.727477 1054224 round_trippers.go:580]     Audit-Id: 02c05db7-51dd-4de4-899e-bf1effd08ca1
	I0830 22:05:23.727609 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-994875","namespace":"kube-system","uid":"ad217ba1-265c-477a-985c-9d9c21b976b8","resourceVersion":"425","creationTimestamp":"2023-08-30T22:03:47Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fdb61cdf64af05dc324b859049e23cf5","kubernetes.io/config.mirror":"fdb61cdf64af05dc324b859049e23cf5","kubernetes.io/config.seen":"2023-08-30T22:03:47.448538353Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0830 22:05:23.728139 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:05:23.728158 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:23.728167 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:23.728175 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:23.730510 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:23.730535 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:23.730544 1054224 round_trippers.go:580]     Audit-Id: e3ed1aa0-7500-404d-9cb4-738b94637e88
	I0830 22:05:23.730552 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:23.730559 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:23.730574 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:23.730580 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:23.730588 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:23 GMT
	I0830 22:05:23.730974 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:05:23.731362 1054224 pod_ready.go:92] pod "kube-controller-manager-multinode-994875" in "kube-system" namespace has status "Ready":"True"
	I0830 22:05:23.731379 1054224 pod_ready.go:81] duration metric: took 6.448502ms waiting for pod "kube-controller-manager-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:05:23.731392 1054224 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-74kdv" in "kube-system" namespace to be "Ready" ...
	I0830 22:05:23.893825 1054224 request.go:629] Waited for 162.3415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-74kdv
	I0830 22:05:23.893934 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-74kdv
	I0830 22:05:23.893948 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:23.893958 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:23.893966 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:23.896677 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:23.896705 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:23.896721 1054224 round_trippers.go:580]     Audit-Id: 6ecf883a-d969-4316-8432-7c2e95aa6493
	I0830 22:05:23.896728 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:23.896735 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:23.896759 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:23.896775 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:23.896782 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:23 GMT
	I0830 22:05:23.896933 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-74kdv","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a3f4710-2aa0-427f-9638-e7ec8cc4f280","resourceVersion":"506","creationTimestamp":"2023-08-30T22:04:52Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aa74bb31-0475-4a10-acfb-8825232ed9aa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aa74bb31-0475-4a10-acfb-8825232ed9aa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0830 22:05:24.093917 1054224 request.go:629] Waited for 196.355216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:24.093997 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875-m02
	I0830 22:05:24.094006 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:24.094016 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:24.094028 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:24.096640 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:24.096666 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:24.096675 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:24.096681 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:24.096688 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:24.096695 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:24.096701 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:24 GMT
	I0830 22:05:24.096709 1054224 round_trippers.go:580]     Audit-Id: 9e60bbe6-c205-4c40-aa49-169537835e11
	I0830 22:05:24.096820 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875-m02","uid":"d788253e-db7b-4b60-9b7b-a405c24aeeeb","resourceVersion":"542","creationTimestamp":"2023-08-30T22:04:51Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5378 chars]
	I0830 22:05:24.097266 1054224 pod_ready.go:92] pod "kube-proxy-74kdv" in "kube-system" namespace has status "Ready":"True"
	I0830 22:05:24.097285 1054224 pod_ready.go:81] duration metric: took 365.883737ms waiting for pod "kube-proxy-74kdv" in "kube-system" namespace to be "Ready" ...
	I0830 22:05:24.097296 1054224 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dn6c5" in "kube-system" namespace to be "Ready" ...
	I0830 22:05:24.293673 1054224 request.go:629] Waited for 196.315397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dn6c5
	I0830 22:05:24.293759 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dn6c5
	I0830 22:05:24.293772 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:24.293782 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:24.293790 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:24.296359 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:24.296383 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:24.296392 1054224 round_trippers.go:580]     Audit-Id: 460bff0e-51d5-4d10-b15d-54b5ed5742f2
	I0830 22:05:24.296401 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:24.296408 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:24.296414 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:24.296429 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:24.296440 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:24 GMT
	I0830 22:05:24.296553 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dn6c5","generateName":"kube-proxy-","namespace":"kube-system","uid":"1ca7b9ca-0dca-404a-a450-5c05dee3e137","resourceVersion":"409","creationTimestamp":"2023-08-30T22:04:00Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aa74bb31-0475-4a10-acfb-8825232ed9aa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:04:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aa74bb31-0475-4a10-acfb-8825232ed9aa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0830 22:05:24.493313 1054224 request.go:629] Waited for 196.262203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:05:24.493399 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:05:24.493411 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:24.493421 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:24.493432 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:24.496216 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:24.496241 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:24.496250 1054224 round_trippers.go:580]     Audit-Id: e58c7f7a-275b-4a1f-8504-034088156023
	I0830 22:05:24.496257 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:24.496263 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:24.496270 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:24.496277 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:24.496284 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:24 GMT
	I0830 22:05:24.496404 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:05:24.496840 1054224 pod_ready.go:92] pod "kube-proxy-dn6c5" in "kube-system" namespace has status "Ready":"True"
	I0830 22:05:24.496852 1054224 pod_ready.go:81] duration metric: took 399.550647ms waiting for pod "kube-proxy-dn6c5" in "kube-system" namespace to be "Ready" ...
	I0830 22:05:24.496864 1054224 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:05:24.694306 1054224 request.go:629] Waited for 197.375745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-994875
	I0830 22:05:24.694417 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-994875
	I0830 22:05:24.694429 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:24.694439 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:24.694450 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:24.697170 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:24.697345 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:24.697354 1054224 round_trippers.go:580]     Audit-Id: 96312466-789e-4fd9-9d01-60bc4adb7693
	I0830 22:05:24.697362 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:24.697368 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:24.697375 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:24.697382 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:24.697389 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:24 GMT
	I0830 22:05:24.697842 1054224 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-994875","namespace":"kube-system","uid":"baf097b2-79dd-4619-b805-5dcf6403427a","resourceVersion":"426","creationTimestamp":"2023-08-30T22:03:47Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0c56505fe72053c740eb16de681f4dc4","kubernetes.io/config.mirror":"0c56505fe72053c740eb16de681f4dc4","kubernetes.io/config.seen":"2023-08-30T22:03:47.448539174Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-30T22:03:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0830 22:05:24.893635 1054224 request.go:629] Waited for 195.346413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:05:24.893710 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994875
	I0830 22:05:24.893715 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:24.893731 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:24.893742 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:24.896299 1054224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0830 22:05:24.896326 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:24.896335 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:24.896343 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:24.896349 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:24.896360 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:24 GMT
	I0830 22:05:24.896372 1054224 round_trippers.go:580]     Audit-Id: bc0d2c98-bf61-406f-9f8d-ae7887498612
	I0830 22:05:24.896379 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:24.896547 1054224 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-30T22:03:43Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0830 22:05:24.896944 1054224 pod_ready.go:92] pod "kube-scheduler-multinode-994875" in "kube-system" namespace has status "Ready":"True"
	I0830 22:05:24.896961 1054224 pod_ready.go:81] duration metric: took 400.090872ms waiting for pod "kube-scheduler-multinode-994875" in "kube-system" namespace to be "Ready" ...
	I0830 22:05:24.896972 1054224 pod_ready.go:38] duration metric: took 1.200226053s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:05:24.896988 1054224 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:05:24.897043 1054224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:05:24.911414 1054224 system_svc.go:56] duration metric: took 14.415033ms WaitForService to wait for kubelet.
	I0830 22:05:24.911443 1054224 kubeadm.go:581] duration metric: took 32.246420313s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 22:05:24.911466 1054224 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:05:25.093916 1054224 request.go:629] Waited for 182.370353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0830 22:05:25.093985 1054224 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0830 22:05:25.093992 1054224 round_trippers.go:469] Request Headers:
	I0830 22:05:25.094001 1054224 round_trippers.go:473]     Accept: application/json, */*
	I0830 22:05:25.094009 1054224 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0830 22:05:25.097244 1054224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0830 22:05:25.097270 1054224 round_trippers.go:577] Response Headers:
	I0830 22:05:25.097280 1054224 round_trippers.go:580]     Audit-Id: 57792d31-a5aa-402e-b631-76fa976a69e3
	I0830 22:05:25.097289 1054224 round_trippers.go:580]     Cache-Control: no-cache, private
	I0830 22:05:25.097296 1054224 round_trippers.go:580]     Content-Type: application/json
	I0830 22:05:25.097303 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0aec9b58-6ca1-48dd-9f15-13e516e373bb
	I0830 22:05:25.097310 1054224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6199e093-14eb-4130-b9e2-0a0abf63e25e
	I0830 22:05:25.097317 1054224 round_trippers.go:580]     Date: Wed, 30 Aug 2023 22:05:25 GMT
	I0830 22:05:25.097934 1054224 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"543"},"items":[{"metadata":{"name":"multinode-994875","uid":"5de22c22-429c-4aeb-b557-23554b80e1fc","resourceVersion":"437","creationTimestamp":"2023-08-30T22:03:43Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994875","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d7e60a4db8510b81002db541520f138fed781588","minikube.k8s.io/name":"multinode-994875","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_30T22_03_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12452 chars]
	I0830 22:05:25.098668 1054224 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0830 22:05:25.098690 1054224 node_conditions.go:123] node cpu capacity is 2
	I0830 22:05:25.098703 1054224 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0830 22:05:25.098708 1054224 node_conditions.go:123] node cpu capacity is 2
	I0830 22:05:25.098713 1054224 node_conditions.go:105] duration metric: took 187.242471ms to run NodePressure ...
	I0830 22:05:25.098724 1054224 start.go:228] waiting for startup goroutines ...
	I0830 22:05:25.098750 1054224 start.go:242] writing updated cluster config ...
	I0830 22:05:25.099107 1054224 ssh_runner.go:195] Run: rm -f paused
	I0830 22:05:25.174392 1054224 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 22:05:25.177948 1054224 out.go:177] * Done! kubectl is now configured to use "multinode-994875" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Aug 30 22:04:32 multinode-994875 crio[894]: time="2023-08-30 22:04:32.732691920Z" level=info msg="Starting container: 6b2424ab232190c731664fda582ba0697f90053a330d5c03d54331e0255a8d9a" id=f24039ca-5054-423f-8df1-44d51e77f4c3 name=/runtime.v1.RuntimeService/StartContainer
	Aug 30 22:04:32 multinode-994875 crio[894]: time="2023-08-30 22:04:32.741116444Z" level=info msg="Created container 7f8adc4eabc08714692fda7d92dee9487a7f37bfeb98ff593fde202839e52c13: kube-system/coredns-5dd5756b68-24ps6/coredns" id=51c684de-3d1c-4103-81de-293907b957b9 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 30 22:04:32 multinode-994875 crio[894]: time="2023-08-30 22:04:32.741896702Z" level=info msg="Starting container: 7f8adc4eabc08714692fda7d92dee9487a7f37bfeb98ff593fde202839e52c13" id=765297b0-234e-4973-b363-54947f17d915 name=/runtime.v1.RuntimeService/StartContainer
	Aug 30 22:04:32 multinode-994875 crio[894]: time="2023-08-30 22:04:32.752177553Z" level=info msg="Started container" PID=1935 containerID=6b2424ab232190c731664fda582ba0697f90053a330d5c03d54331e0255a8d9a description=kube-system/storage-provisioner/storage-provisioner id=f24039ca-5054-423f-8df1-44d51e77f4c3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fa29d8d72cc1684fe84bbe778140a36b6754fb44414c5d0c40b91d53ee7e795e
	Aug 30 22:04:32 multinode-994875 crio[894]: time="2023-08-30 22:04:32.753459718Z" level=info msg="Started container" PID=1943 containerID=7f8adc4eabc08714692fda7d92dee9487a7f37bfeb98ff593fde202839e52c13 description=kube-system/coredns-5dd5756b68-24ps6/coredns id=765297b0-234e-4973-b363-54947f17d915 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b28910d0cbf5b691c38f4692adcfe9aaded1564876d363036055cad44c671297
	Aug 30 22:05:26 multinode-994875 crio[894]: time="2023-08-30 22:05:26.368348219Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-rdfhb/POD" id=232380c6-6726-42b5-ba18-b14eb36da330 name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 30 22:05:26 multinode-994875 crio[894]: time="2023-08-30 22:05:26.368412982Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 30 22:05:26 multinode-994875 crio[894]: time="2023-08-30 22:05:26.389406659Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-rdfhb Namespace:default ID:bf7b2fc7666116362b92f7d760dee52f571cad17410f5f0723b6c69e8ad3695c UID:41d04fb7-c258-4243-8f8b-10c26e7456ba NetNS:/var/run/netns/44e40acb-ab21-4926-9eb1-7d616ccfeb33 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 30 22:05:26 multinode-994875 crio[894]: time="2023-08-30 22:05:26.389467015Z" level=info msg="Adding pod default_busybox-5bc68d56bd-rdfhb to CNI network \"kindnet\" (type=ptp)"
	Aug 30 22:05:26 multinode-994875 crio[894]: time="2023-08-30 22:05:26.399234505Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-rdfhb Namespace:default ID:bf7b2fc7666116362b92f7d760dee52f571cad17410f5f0723b6c69e8ad3695c UID:41d04fb7-c258-4243-8f8b-10c26e7456ba NetNS:/var/run/netns/44e40acb-ab21-4926-9eb1-7d616ccfeb33 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 30 22:05:26 multinode-994875 crio[894]: time="2023-08-30 22:05:26.399394529Z" level=info msg="Checking pod default_busybox-5bc68d56bd-rdfhb for CNI network kindnet (type=ptp)"
	Aug 30 22:05:26 multinode-994875 crio[894]: time="2023-08-30 22:05:26.411235263Z" level=info msg="Ran pod sandbox bf7b2fc7666116362b92f7d760dee52f571cad17410f5f0723b6c69e8ad3695c with infra container: default/busybox-5bc68d56bd-rdfhb/POD" id=232380c6-6726-42b5-ba18-b14eb36da330 name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 30 22:05:26 multinode-994875 crio[894]: time="2023-08-30 22:05:26.412153490Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=c794f871-a9ec-459c-82b9-8750deba7e46 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 22:05:26 multinode-994875 crio[894]: time="2023-08-30 22:05:26.412364321Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=c794f871-a9ec-459c-82b9-8750deba7e46 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 22:05:26 multinode-994875 crio[894]: time="2023-08-30 22:05:26.413381682Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=b178365e-3044-43e7-8871-b365b921cf76 name=/runtime.v1.ImageService/PullImage
	Aug 30 22:05:26 multinode-994875 crio[894]: time="2023-08-30 22:05:26.415251218Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Aug 30 22:05:27 multinode-994875 crio[894]: time="2023-08-30 22:05:27.042345324Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Aug 30 22:05:28 multinode-994875 crio[894]: time="2023-08-30 22:05:28.508328391Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=b178365e-3044-43e7-8871-b365b921cf76 name=/runtime.v1.ImageService/PullImage
	Aug 30 22:05:28 multinode-994875 crio[894]: time="2023-08-30 22:05:28.509726929Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=354e21b2-2b66-4a13-a8f1-814cb766a029 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 22:05:28 multinode-994875 crio[894]: time="2023-08-30 22:05:28.510551182Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=354e21b2-2b66-4a13-a8f1-814cb766a029 name=/runtime.v1.ImageService/ImageStatus
	Aug 30 22:05:28 multinode-994875 crio[894]: time="2023-08-30 22:05:28.512154471Z" level=info msg="Creating container: default/busybox-5bc68d56bd-rdfhb/busybox" id=827189bb-dee5-4f1b-9217-fa8f6e6723e6 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 30 22:05:28 multinode-994875 crio[894]: time="2023-08-30 22:05:28.512269679Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 30 22:05:28 multinode-994875 crio[894]: time="2023-08-30 22:05:28.598053409Z" level=info msg="Created container 55e1e9cd3863f49ccf99030dbd5a174c774fa923d3992a23fa465daa15ee2aa0: default/busybox-5bc68d56bd-rdfhb/busybox" id=827189bb-dee5-4f1b-9217-fa8f6e6723e6 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 30 22:05:28 multinode-994875 crio[894]: time="2023-08-30 22:05:28.600083610Z" level=info msg="Starting container: 55e1e9cd3863f49ccf99030dbd5a174c774fa923d3992a23fa465daa15ee2aa0" id=97592257-1153-489d-8a3d-d147ab38f9d5 name=/runtime.v1.RuntimeService/StartContainer
	Aug 30 22:05:28 multinode-994875 crio[894]: time="2023-08-30 22:05:28.608653357Z" level=info msg="Started container" PID=2089 containerID=55e1e9cd3863f49ccf99030dbd5a174c774fa923d3992a23fa465daa15ee2aa0 description=default/busybox-5bc68d56bd-rdfhb/busybox id=97592257-1153-489d-8a3d-d147ab38f9d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bf7b2fc7666116362b92f7d760dee52f571cad17410f5f0723b6c69e8ad3695c
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	55e1e9cd3863f       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   bf7b2fc766611       busybox-5bc68d56bd-rdfhb
	7f8adc4eabc08       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      About a minute ago   Running             coredns                   0                   b28910d0cbf5b       coredns-5dd5756b68-24ps6
	6b2424ab23219       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      About a minute ago   Running             storage-provisioner       0                   fa29d8d72cc16       storage-provisioner
	150efdfa0a62a       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                      About a minute ago   Running             kindnet-cni               0                   c13482f27908c       kindnet-gdfw4
	d08b0da71228b       812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26                                      About a minute ago   Running             kube-proxy                0                   901ee041d4a22       kube-proxy-dn6c5
	b2c57e4290579       8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965                                      About a minute ago   Running             kube-controller-manager   0                   c9c990dfe048c       kube-controller-manager-multinode-994875
	33ff29896ee11       b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87                                      About a minute ago   Running             kube-scheduler            0                   1e12baaed7a95       kube-scheduler-multinode-994875
	cde2637214441       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   bcb0d49914d04       etcd-multinode-994875
	76cc080627497       b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a                                      About a minute ago   Running             kube-apiserver            0                   5431f8d0355b0       kube-apiserver-multinode-994875
	
	* 
	* ==> coredns [7f8adc4eabc08714692fda7d92dee9487a7f37bfeb98ff593fde202839e52c13] <==
	* [INFO] 10.244.1.2:38651 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115044s
	[INFO] 10.244.0.3:58131 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119877s
	[INFO] 10.244.0.3:48531 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001079523s
	[INFO] 10.244.0.3:57940 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086515s
	[INFO] 10.244.0.3:49151 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049337s
	[INFO] 10.244.0.3:59590 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000906527s
	[INFO] 10.244.0.3:44612 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082084s
	[INFO] 10.244.0.3:48704 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006048s
	[INFO] 10.244.0.3:49998 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060299s
	[INFO] 10.244.1.2:56830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100086s
	[INFO] 10.244.1.2:40678 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091176s
	[INFO] 10.244.1.2:33310 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064336s
	[INFO] 10.244.1.2:57681 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062343s
	[INFO] 10.244.0.3:44911 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010002s
	[INFO] 10.244.0.3:47037 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070031s
	[INFO] 10.244.0.3:54052 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056385s
	[INFO] 10.244.0.3:47247 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000050733s
	[INFO] 10.244.1.2:55173 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172225s
	[INFO] 10.244.1.2:47692 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193813s
	[INFO] 10.244.1.2:54989 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118736s
	[INFO] 10.244.1.2:57691 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097132s
	[INFO] 10.244.0.3:54862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119466s
	[INFO] 10.244.0.3:51394 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082289s
	[INFO] 10.244.0.3:40501 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000054613s
	[INFO] 10.244.0.3:49615 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056919s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-994875
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-994875
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7e60a4db8510b81002db541520f138fed781588
	                    minikube.k8s.io/name=multinode-994875
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T22_03_48_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 22:03:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-994875
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 22:05:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 22:04:32 +0000   Wed, 30 Aug 2023 22:03:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 22:04:32 +0000   Wed, 30 Aug 2023 22:03:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 22:04:32 +0000   Wed, 30 Aug 2023 22:03:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 22:04:32 +0000   Wed, 30 Aug 2023 22:04:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-994875
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022572Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022572Ki
	  pods:               110
	System Info:
	  Machine ID:                 0341fd41653246e4ab7b3e03144ff7ef
	  System UUID:                34516b7e-665e-447c-be5f-f01c97bad384
	  Boot ID:                    98673563-8173-4281-afb4-eac1dfafdc23
	  Kernel Version:             5.15.0-1043-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-rdfhb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-24ps6                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     94s
	  kube-system                 etcd-multinode-994875                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         109s
	  kube-system                 kindnet-gdfw4                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      94s
	  kube-system                 kube-apiserver-multinode-994875             250m (12%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-multinode-994875    200m (10%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-dn6c5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-multinode-994875             100m (5%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 92s                  kube-proxy       
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node multinode-994875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node multinode-994875 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 115s)  kubelet          Node multinode-994875 status is now: NodeHasSufficientPID
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s                 kubelet          Node multinode-994875 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s                 kubelet          Node multinode-994875 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s                 kubelet          Node multinode-994875 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           94s                  node-controller  Node multinode-994875 event: Registered Node multinode-994875 in Controller
	  Normal  NodeReady                62s                  kubelet          Node multinode-994875 status is now: NodeReady
	
	
	Name:               multinode-994875-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-994875-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 22:04:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-994875-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 22:05:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 22:05:23 +0000   Wed, 30 Aug 2023 22:04:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 22:05:23 +0000   Wed, 30 Aug 2023 22:04:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 22:05:23 +0000   Wed, 30 Aug 2023 22:04:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 22:05:23 +0000   Wed, 30 Aug 2023 22:05:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-994875-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022572Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022572Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3ee5dca88c34813810220fb0efc4075
	  System UUID:                3bf04a64-081a-43e1-9259-60b9c8a01219
	  Boot ID:                    98673563-8173-4281-afb4-eac1dfafdc23
	  Kernel Version:             5.15.0-1043-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-8gn7x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-67zxx               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      42s
	  kube-system                 kube-proxy-74kdv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  NodeHasSufficientMemory  43s (x5 over 44s)  kubelet          Node multinode-994875-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x5 over 44s)  kubelet          Node multinode-994875-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x5 over 44s)  kubelet          Node multinode-994875-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node multinode-994875-m02 event: Registered Node multinode-994875-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-994875-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001068] FS-Cache: O-key=[8] 'a53f5c0100000000'
	[  +0.000743] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000990] FS-Cache: N-cookie d=00000000d8a48a2b{9p.inode} n=0000000052a3ffac
	[  +0.001181] FS-Cache: N-key=[8] 'a53f5c0100000000'
	[  +0.003620] FS-Cache: Duplicate cookie detected
	[  +0.000757] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000989] FS-Cache: O-cookie d=00000000d8a48a2b{9p.inode} n=00000000cfc10e18
	[  +0.001078] FS-Cache: O-key=[8] 'a53f5c0100000000'
	[  +0.000892] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000999] FS-Cache: N-cookie d=00000000d8a48a2b{9p.inode} n=00000000ec866464
	[  +0.001154] FS-Cache: N-key=[8] 'a53f5c0100000000'
	[  +3.285800] FS-Cache: Duplicate cookie detected
	[  +0.000913] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001105] FS-Cache: O-cookie d=00000000d8a48a2b{9p.inode} n=00000000185770a2
	[  +0.001225] FS-Cache: O-key=[8] 'a43f5c0100000000'
	[  +0.000833] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001080] FS-Cache: N-cookie d=00000000d8a48a2b{9p.inode} n=000000006d053276
	[  +0.001194] FS-Cache: N-key=[8] 'a43f5c0100000000'
	[  +0.414572] FS-Cache: Duplicate cookie detected
	[  +0.000724] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000967] FS-Cache: O-cookie d=00000000d8a48a2b{9p.inode} n=000000001a64e3e4
	[  +0.001029] FS-Cache: O-key=[8] 'aa3f5c0100000000'
	[  +0.000731] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000927] FS-Cache: N-cookie d=00000000d8a48a2b{9p.inode} n=00000000a41d18fb
	[  +0.001092] FS-Cache: N-key=[8] 'aa3f5c0100000000'
	
	* 
	* ==> etcd [cde2637214441cb06cb885bc686ed2ad7c60130c15f6f348539f9a49545b06c7] <==
	* {"level":"info","ts":"2023-08-30T22:03:40.022839Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-30T22:03:40.023203Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-08-30T22:03:40.023265Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-08-30T22:03:40.023803Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-30T22:03:40.02398Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-30T22:03:40.024187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-08-30T22:03:40.024343Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-08-30T22:03:40.777179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-30T22:03:40.777301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-30T22:03:40.777341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-08-30T22:03:40.777406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-08-30T22:03:40.777439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-08-30T22:03:40.777484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-08-30T22:03:40.777521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-08-30T22:03:40.781283Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T22:03:40.7854Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-994875 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-30T22:03:40.785489Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T22:03:40.78669Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-08-30T22:03:40.786908Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T22:03:40.787815Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-30T22:03:40.797578Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-30T22:03:40.797657Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-30T22:03:40.797846Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T22:03:40.797957Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T22:03:40.798024Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  22:05:34 up  6:47,  0 users,  load average: 0.89, 1.71, 1.76
	Linux multinode-994875 5.15.0-1043-aws #48~20.04.1-Ubuntu SMP Wed Aug 16 18:32:42 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [150efdfa0a62a7baa48759490356dda182e6e6d7b6bbf45c7cc6ed5a19159266] <==
	* I0830 22:04:01.550483       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0830 22:04:31.710986       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0830 22:04:31.725113       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0830 22:04:31.725173       1 main.go:227] handling current node
	I0830 22:04:41.741835       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0830 22:04:41.742158       1 main.go:227] handling current node
	I0830 22:04:51.753486       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0830 22:04:51.753515       1 main.go:227] handling current node
	I0830 22:05:01.758519       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0830 22:05:01.758552       1 main.go:227] handling current node
	I0830 22:05:01.758564       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0830 22:05:01.758570       1 main.go:250] Node multinode-994875-m02 has CIDR [10.244.1.0/24] 
	I0830 22:05:01.758712       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0830 22:05:11.774145       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0830 22:05:11.774175       1 main.go:227] handling current node
	I0830 22:05:11.774185       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0830 22:05:11.774191       1 main.go:250] Node multinode-994875-m02 has CIDR [10.244.1.0/24] 
	I0830 22:05:21.782247       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0830 22:05:21.782277       1 main.go:227] handling current node
	I0830 22:05:21.782288       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0830 22:05:21.782294       1 main.go:250] Node multinode-994875-m02 has CIDR [10.244.1.0/24] 
	I0830 22:05:31.793190       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0830 22:05:31.793224       1 main.go:227] handling current node
	I0830 22:05:31.793235       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0830 22:05:31.793241       1 main.go:250] Node multinode-994875-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [76cc08062749798806c8c2383e99681ab6e69799e3a50f58adfcb6b432a42c92] <==
	* I0830 22:03:43.953752       1 cache.go:39] Caches are synced for autoregister controller
	I0830 22:03:43.964039       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0830 22:03:43.964683       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0830 22:03:43.964776       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0830 22:03:43.965783       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0830 22:03:43.965863       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0830 22:03:43.965903       1 shared_informer.go:318] Caches are synced for configmaps
	I0830 22:03:43.968172       1 controller.go:624] quota admission added evaluator for: namespaces
	E0830 22:03:43.989707       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0830 22:03:44.192767       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0830 22:03:44.769988       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0830 22:03:44.775253       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0830 22:03:44.775277       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0830 22:03:45.438864       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0830 22:03:45.486062       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0830 22:03:45.607722       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0830 22:03:45.619270       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0830 22:03:45.620650       1 controller.go:624] quota admission added evaluator for: endpoints
	I0830 22:03:45.625995       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0830 22:03:45.910669       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0830 22:03:47.357257       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0830 22:03:47.374932       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0830 22:03:47.390180       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0830 22:04:00.366858       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0830 22:04:00.390729       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [b2c57e4290579e050be0c8668783e0f06f70123df652f2790f3e42f7fba0d17e] <==
	* I0830 22:04:00.959367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.738959ms"
	I0830 22:04:00.960696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.326µs"
	I0830 22:04:32.268403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="103.803µs"
	I0830 22:04:32.285731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.524µs"
	I0830 22:04:33.691692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.045979ms"
	I0830 22:04:33.691785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.026µs"
	I0830 22:04:35.111967       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0830 22:04:51.978267       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-994875-m02\" does not exist"
	I0830 22:04:52.006870       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-994875-m02" podCIDRs=["10.244.1.0/24"]
	I0830 22:04:52.015363       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-67zxx"
	I0830 22:04:52.015597       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-74kdv"
	I0830 22:04:55.114527       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-994875-m02"
	I0830 22:04:55.114690       1 event.go:307] "Event occurred" object="multinode-994875-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-994875-m02 event: Registered Node multinode-994875-m02 in Controller"
	I0830 22:05:23.337101       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-994875-m02"
	I0830 22:05:25.985887       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0830 22:05:26.025418       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-8gn7x"
	I0830 22:05:26.039373       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-rdfhb"
	I0830 22:05:26.068414       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="82.258318ms"
	I0830 22:05:26.097721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="28.493255ms"
	I0830 22:05:26.114570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.787412ms"
	I0830 22:05:26.114701       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="68.078µs"
	I0830 22:05:28.776307       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.002872ms"
	I0830 22:05:28.776378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="33.001µs"
	I0830 22:05:29.638093       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.514783ms"
	I0830 22:05:29.638161       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.511µs"
	
	* 
	* ==> kube-proxy [d08b0da71228b4382ec3c09da750239ed2f4132e21ec8d9dfa4b4d5a43cc3f1c] <==
	* I0830 22:04:01.314886       1 server_others.go:69] "Using iptables proxy"
	I0830 22:04:01.366364       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0830 22:04:01.638490       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0830 22:04:01.643765       1 server_others.go:152] "Using iptables Proxier"
	I0830 22:04:01.643906       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0830 22:04:01.643941       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0830 22:04:01.644055       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0830 22:04:01.644315       1 server.go:846] "Version info" version="v1.28.1"
	I0830 22:04:01.644373       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 22:04:01.650928       1 config.go:188] "Starting service config controller"
	I0830 22:04:01.651046       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0830 22:04:01.651107       1 config.go:97] "Starting endpoint slice config controller"
	I0830 22:04:01.651134       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0830 22:04:01.651748       1 config.go:315] "Starting node config controller"
	I0830 22:04:01.651821       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0830 22:04:01.752122       1 shared_informer.go:318] Caches are synced for node config
	I0830 22:04:01.752156       1 shared_informer.go:318] Caches are synced for service config
	I0830 22:04:01.752184       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [33ff29896ee11bdb9a220fc789d6a0fb1863e2a535a1abb6edba6cb1655bb80e] <==
	* W0830 22:03:43.945483       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0830 22:03:43.945547       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0830 22:03:43.945587       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 22:03:43.945607       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0830 22:03:43.945554       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0830 22:03:43.945623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0830 22:03:43.945429       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0830 22:03:43.945646       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0830 22:03:43.945521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0830 22:03:43.945661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0830 22:03:43.945739       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0830 22:03:43.945793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0830 22:03:44.808307       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 22:03:44.808430       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0830 22:03:45.051566       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0830 22:03:45.051616       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0830 22:03:45.059806       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0830 22:03:45.059843       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0830 22:03:45.134476       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0830 22:03:45.134521       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0830 22:03:45.159259       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 22:03:45.159314       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0830 22:03:45.173689       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0830 22:03:45.173744       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0830 22:03:47.933854       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Aug 30 22:04:00 multinode-994875 kubelet[1383]: I0830 22:04:00.482230    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ca7b9ca-0dca-404a-a450-5c05dee3e137-xtables-lock\") pod \"kube-proxy-dn6c5\" (UID: \"1ca7b9ca-0dca-404a-a450-5c05dee3e137\") " pod="kube-system/kube-proxy-dn6c5"
	Aug 30 22:04:00 multinode-994875 kubelet[1383]: I0830 22:04:00.482275    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ca7b9ca-0dca-404a-a450-5c05dee3e137-kube-proxy\") pod \"kube-proxy-dn6c5\" (UID: \"1ca7b9ca-0dca-404a-a450-5c05dee3e137\") " pod="kube-system/kube-proxy-dn6c5"
	Aug 30 22:04:00 multinode-994875 kubelet[1383]: I0830 22:04:00.482304    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ca7b9ca-0dca-404a-a450-5c05dee3e137-lib-modules\") pod \"kube-proxy-dn6c5\" (UID: \"1ca7b9ca-0dca-404a-a450-5c05dee3e137\") " pod="kube-system/kube-proxy-dn6c5"
	Aug 30 22:04:00 multinode-994875 kubelet[1383]: I0830 22:04:00.482330    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbzzp\" (UniqueName: \"kubernetes.io/projected/1ca7b9ca-0dca-404a-a450-5c05dee3e137-kube-api-access-pbzzp\") pod \"kube-proxy-dn6c5\" (UID: \"1ca7b9ca-0dca-404a-a450-5c05dee3e137\") " pod="kube-system/kube-proxy-dn6c5"
	Aug 30 22:04:00 multinode-994875 kubelet[1383]: I0830 22:04:00.500467    1383 topology_manager.go:215] "Topology Admit Handler" podUID="375a0b4c-8f52-4769-83d0-7b723290fac2" podNamespace="kube-system" podName="kindnet-gdfw4"
	Aug 30 22:04:00 multinode-994875 kubelet[1383]: I0830 22:04:00.582871    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/375a0b4c-8f52-4769-83d0-7b723290fac2-lib-modules\") pod \"kindnet-gdfw4\" (UID: \"375a0b4c-8f52-4769-83d0-7b723290fac2\") " pod="kube-system/kindnet-gdfw4"
	Aug 30 22:04:00 multinode-994875 kubelet[1383]: I0830 22:04:00.582961    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/375a0b4c-8f52-4769-83d0-7b723290fac2-cni-cfg\") pod \"kindnet-gdfw4\" (UID: \"375a0b4c-8f52-4769-83d0-7b723290fac2\") " pod="kube-system/kindnet-gdfw4"
	Aug 30 22:04:00 multinode-994875 kubelet[1383]: I0830 22:04:00.582991    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/375a0b4c-8f52-4769-83d0-7b723290fac2-xtables-lock\") pod \"kindnet-gdfw4\" (UID: \"375a0b4c-8f52-4769-83d0-7b723290fac2\") " pod="kube-system/kindnet-gdfw4"
	Aug 30 22:04:00 multinode-994875 kubelet[1383]: I0830 22:04:00.583016    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5sr6\" (UniqueName: \"kubernetes.io/projected/375a0b4c-8f52-4769-83d0-7b723290fac2-kube-api-access-g5sr6\") pod \"kindnet-gdfw4\" (UID: \"375a0b4c-8f52-4769-83d0-7b723290fac2\") " pod="kube-system/kindnet-gdfw4"
	Aug 30 22:04:00 multinode-994875 kubelet[1383]: W0830 22:04:00.830094    1383 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9f440389aa1f0e1edb3413132ceff0a094431388097aea03d597a527064c8544/crio-901ee041d4a225d8266e61642ecd2a2d29e0e9f1bfd86d22b9b39fe627da1b4a WatchSource:0}: Error finding container 901ee041d4a225d8266e61642ecd2a2d29e0e9f1bfd86d22b9b39fe627da1b4a: Status 404 returned error can't find the container with id 901ee041d4a225d8266e61642ecd2a2d29e0e9f1bfd86d22b9b39fe627da1b4a
	Aug 30 22:04:01 multinode-994875 kubelet[1383]: W0830 22:04:01.161809    1383 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9f440389aa1f0e1edb3413132ceff0a094431388097aea03d597a527064c8544/crio-c13482f27908c9f22441f64da23300b6b539fcc051e1e416cebcd7d8a82302d8 WatchSource:0}: Error finding container c13482f27908c9f22441f64da23300b6b539fcc051e1e416cebcd7d8a82302d8: Status 404 returned error can't find the container with id c13482f27908c9f22441f64da23300b6b539fcc051e1e416cebcd7d8a82302d8
	Aug 30 22:04:01 multinode-994875 kubelet[1383]: I0830 22:04:01.630427    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dn6c5" podStartSLOduration=1.630384265 podCreationTimestamp="2023-08-30 22:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-30 22:04:01.612768702 +0000 UTC m=+14.286920970" watchObservedRunningTime="2023-08-30 22:04:01.630384265 +0000 UTC m=+14.304536541"
	Aug 30 22:04:07 multinode-994875 kubelet[1383]: I0830 22:04:07.512020    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-gdfw4" podStartSLOduration=7.511978626 podCreationTimestamp="2023-08-30 22:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-30 22:04:01.632198195 +0000 UTC m=+14.306350463" watchObservedRunningTime="2023-08-30 22:04:07.511978626 +0000 UTC m=+20.186130910"
	Aug 30 22:04:32 multinode-994875 kubelet[1383]: I0830 22:04:32.224944    1383 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Aug 30 22:04:32 multinode-994875 kubelet[1383]: I0830 22:04:32.263707    1383 topology_manager.go:215] "Topology Admit Handler" podUID="a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117" podNamespace="kube-system" podName="coredns-5dd5756b68-24ps6"
	Aug 30 22:04:32 multinode-994875 kubelet[1383]: I0830 22:04:32.270161    1383 topology_manager.go:215] "Topology Admit Handler" podUID="69e4c211-d1a6-408e-b03c-6a194165f888" podNamespace="kube-system" podName="storage-provisioner"
	Aug 30 22:04:32 multinode-994875 kubelet[1383]: I0830 22:04:32.315959    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wln4\" (UniqueName: \"kubernetes.io/projected/a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117-kube-api-access-4wln4\") pod \"coredns-5dd5756b68-24ps6\" (UID: \"a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117\") " pod="kube-system/coredns-5dd5756b68-24ps6"
	Aug 30 22:04:32 multinode-994875 kubelet[1383]: I0830 22:04:32.316018    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/69e4c211-d1a6-408e-b03c-6a194165f888-tmp\") pod \"storage-provisioner\" (UID: \"69e4c211-d1a6-408e-b03c-6a194165f888\") " pod="kube-system/storage-provisioner"
	Aug 30 22:04:32 multinode-994875 kubelet[1383]: I0830 22:04:32.316052    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8b59\" (UniqueName: \"kubernetes.io/projected/69e4c211-d1a6-408e-b03c-6a194165f888-kube-api-access-m8b59\") pod \"storage-provisioner\" (UID: \"69e4c211-d1a6-408e-b03c-6a194165f888\") " pod="kube-system/storage-provisioner"
	Aug 30 22:04:32 multinode-994875 kubelet[1383]: I0830 22:04:32.316078    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117-config-volume\") pod \"coredns-5dd5756b68-24ps6\" (UID: \"a0a2cbc5-d7b5-4a1a-bbc7-8eca90b6e117\") " pod="kube-system/coredns-5dd5756b68-24ps6"
	Aug 30 22:04:33 multinode-994875 kubelet[1383]: I0830 22:04:33.679688    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.679636428 podCreationTimestamp="2023-08-30 22:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-30 22:04:33.66665501 +0000 UTC m=+46.340807278" watchObservedRunningTime="2023-08-30 22:04:33.679636428 +0000 UTC m=+46.353788713"
	Aug 30 22:05:26 multinode-994875 kubelet[1383]: I0830 22:05:26.066523    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-24ps6" podStartSLOduration=86.066479382 podCreationTimestamp="2023-08-30 22:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-30 22:04:33.680160932 +0000 UTC m=+46.354313208" watchObservedRunningTime="2023-08-30 22:05:26.066479382 +0000 UTC m=+98.740631658"
	Aug 30 22:05:26 multinode-994875 kubelet[1383]: I0830 22:05:26.066702    1383 topology_manager.go:215] "Topology Admit Handler" podUID="41d04fb7-c258-4243-8f8b-10c26e7456ba" podNamespace="default" podName="busybox-5bc68d56bd-rdfhb"
	Aug 30 22:05:26 multinode-994875 kubelet[1383]: I0830 22:05:26.164928    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stcqc\" (UniqueName: \"kubernetes.io/projected/41d04fb7-c258-4243-8f8b-10c26e7456ba-kube-api-access-stcqc\") pod \"busybox-5bc68d56bd-rdfhb\" (UID: \"41d04fb7-c258-4243-8f8b-10c26e7456ba\") " pod="default/busybox-5bc68d56bd-rdfhb"
	Aug 30 22:05:26 multinode-994875 kubelet[1383]: W0830 22:05:26.410006    1383 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9f440389aa1f0e1edb3413132ceff0a094431388097aea03d597a527064c8544/crio-bf7b2fc7666116362b92f7d760dee52f571cad17410f5f0723b6c69e8ad3695c WatchSource:0}: Error finding container bf7b2fc7666116362b92f7d760dee52f571cad17410f5f0723b6c69e8ad3695c: Status 404 returned error can't find the container with id bf7b2fc7666116362b92f7d760dee52f571cad17410f5f0723b6c69e8ad3695c
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-994875 -n multinode-994875
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-994875 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.87s)

                                                
                                    
TestRunningBinaryUpgrade (67.89s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.17.0.832216318.exe start -p running-upgrade-631059 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0830 22:26:35.402310  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.17.0.832216318.exe start -p running-upgrade-631059 --memory=2200 --vm-driver=docker  --container-runtime=crio: (58.538450031s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-631059 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-631059 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (4.998070021s)

                                                
                                                
-- stdout --
	* [running-upgrade-631059] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17145
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-631059 in cluster running-upgrade-631059
	* Pulling base image ...
	* Updating the running docker "running-upgrade-631059" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 22:27:00.592543 1132918 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:27:00.592801 1132918 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:27:00.592814 1132918 out.go:309] Setting ErrFile to fd 2...
	I0830 22:27:00.592820 1132918 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:27:00.593108 1132918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
	I0830 22:27:00.593620 1132918 out.go:303] Setting JSON to false
	I0830 22:27:00.594844 1132918 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25755,"bootTime":1693408666,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0830 22:27:00.594914 1132918 start.go:138] virtualization:  
	I0830 22:27:00.597565 1132918 out.go:177] * [running-upgrade-631059] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0830 22:27:00.599763 1132918 out.go:177]   - MINIKUBE_LOCATION=17145
	I0830 22:27:00.601716 1132918 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:27:00.599889 1132918 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0830 22:27:00.599928 1132918 notify.go:220] Checking for updates...
	I0830 22:27:00.604016 1132918 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 22:27:00.605695 1132918 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	I0830 22:27:00.613286 1132918 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0830 22:27:00.615967 1132918 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:27:00.618440 1132918 config.go:182] Loaded profile config "running-upgrade-631059": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0830 22:27:00.621029 1132918 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0830 22:27:00.623197 1132918 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:27:00.649641 1132918 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 22:27:00.649744 1132918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:27:00.786997 1132918 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-08-30 22:27:00.776756236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:27:00.787805 1132918 docker.go:294] overlay module found
	I0830 22:27:00.790877 1132918 out.go:177] * Using the docker driver based on existing profile
	I0830 22:27:00.793271 1132918 start.go:298] selected driver: docker
	I0830 22:27:00.793292 1132918 start.go:902] validating driver "docker" against &{Name:running-upgrade-631059 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-631059 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.20 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:27:00.793380 1132918 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:27:00.794128 1132918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:27:00.887957 1132918 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0830 22:27:00.911891 1132918 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-08-30 22:27:00.90173165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:27:00.912188 1132918 cni.go:84] Creating CNI manager for ""
	I0830 22:27:00.912197 1132918 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0830 22:27:00.912211 1132918 start_flags.go:319] config:
	{Name:running-upgrade-631059 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-631059 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.20 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:27:00.915637 1132918 out.go:177] * Starting control plane node running-upgrade-631059 in cluster running-upgrade-631059
	I0830 22:27:00.917486 1132918 cache.go:122] Beginning downloading kic base image for docker with crio
	I0830 22:27:00.919565 1132918 out.go:177] * Pulling base image ...
	I0830 22:27:00.921810 1132918 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0830 22:27:00.921893 1132918 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0830 22:27:00.940495 1132918 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0830 22:27:00.940526 1132918 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0830 22:27:00.990086 1132918 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0830 22:27:00.990236 1132918 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/running-upgrade-631059/config.json ...
	I0830 22:27:00.990277 1132918 cache.go:107] acquiring lock: {Name:mkf5ab9713f972e910cdd35e849e7b313ff0cf80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:27:00.990368 1132918 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0830 22:27:00.990378 1132918 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 107.676µs
	I0830 22:27:00.990392 1132918 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0830 22:27:00.990404 1132918 cache.go:107] acquiring lock: {Name:mk06076f7b31c5287734228bdc2942cac2953015 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:27:00.990434 1132918 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0830 22:27:00.990440 1132918 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 38.761µs
	I0830 22:27:00.990448 1132918 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0830 22:27:00.990457 1132918 cache.go:107] acquiring lock: {Name:mk09a8e8ef4f40d9e8afb0f142b26cbc91a70a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:27:00.990482 1132918 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0830 22:27:00.990487 1132918 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 31.146µs
	I0830 22:27:00.990493 1132918 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0830 22:27:00.990504 1132918 cache.go:195] Successfully downloaded all kic artifacts
	I0830 22:27:00.990502 1132918 cache.go:107] acquiring lock: {Name:mk852a6aca1b3325ffa93aa8a30a68ac177b5cf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:27:00.990532 1132918 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0830 22:27:00.990537 1132918 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 35.437µs
	I0830 22:27:00.990543 1132918 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0830 22:27:00.990538 1132918 start.go:365] acquiring machines lock for running-upgrade-631059: {Name:mk17aa5b673bde11e890edca5d07ec058b396a66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:27:00.990552 1132918 cache.go:107] acquiring lock: {Name:mkfad56a71e12611916dea6bf70fd042ac640a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:27:00.990576 1132918 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0830 22:27:00.990580 1132918 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 29.596µs
	I0830 22:27:00.990587 1132918 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0830 22:27:00.990596 1132918 start.go:369] acquired machines lock for "running-upgrade-631059" in 44.078µs
	I0830 22:27:00.990611 1132918 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:27:00.990616 1132918 fix.go:54] fixHost starting: 
	I0830 22:27:00.990618 1132918 cache.go:107] acquiring lock: {Name:mk208fc2f3b60244f9f2ab5a28145abac20df0d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:27:00.990647 1132918 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0830 22:27:00.990652 1132918 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 35.422µs
	I0830 22:27:00.990658 1132918 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0830 22:27:00.990666 1132918 cache.go:107] acquiring lock: {Name:mkd5c8e89021331bf56571747bef80c528c1deb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:27:00.990696 1132918 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0830 22:27:00.990700 1132918 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 35.233µs
	I0830 22:27:00.990716 1132918 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0830 22:27:00.990725 1132918 cache.go:107] acquiring lock: {Name:mk0f0f8d201bdc1fca6426f53c7ecf3d4fa67ad9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:27:00.990759 1132918 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0830 22:27:00.990764 1132918 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 40.853µs
	I0830 22:27:00.990770 1132918 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0830 22:27:00.990775 1132918 cache.go:87] Successfully saved all images to host disk.
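
The preload probe at 22:27:00.990086 above returned 404 (no v1.20.2/cri-o/arm64 preload tarball is published), so the cache.go lines record the fallback path: every image is saved to the per-arch cache individually. A minimal sketch of that availability probe, assuming a plain HTTP HEAD is sufficient (preloadExists is a hypothetical helper, not minikube's actual function; the URL is the one from the log):

	package main

	import (
		"fmt"
		"net/http"
	)

	// preloadExists reports whether the preload tarball is published; any
	// status other than 200 (here a 404, as logged above) means the caller
	// must fall back to caching images one by one.
	func preloadExists(url string) (bool, error) {
		resp, err := http.Head(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	}

	func main() {
		const url = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4"
		ok, err := preloadExists(url)
		fmt.Println("preload available:", ok, "err:", err) // prints false for this URL
	}
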
	I0830 22:27:00.990873 1132918 cli_runner.go:164] Run: docker container inspect running-upgrade-631059 --format={{.State.Status}}
	I0830 22:27:01.008767 1132918 fix.go:102] recreateIfNeeded on running-upgrade-631059: state=Running err=<nil>
	W0830 22:27:01.008817 1132918 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:27:01.011157 1132918 out.go:177] * Updating the running docker "running-upgrade-631059" container ...
	I0830 22:27:01.012822 1132918 machine.go:88] provisioning docker machine ...
	I0830 22:27:01.012857 1132918 ubuntu.go:169] provisioning hostname "running-upgrade-631059"
	I0830 22:27:01.012945 1132918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-631059
	I0830 22:27:01.031679 1132918 main.go:141] libmachine: Using SSH client type: native
	I0830 22:27:01.032201 1132918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34224 <nil> <nil>}
	I0830 22:27:01.032222 1132918 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-631059 && echo "running-upgrade-631059" | sudo tee /etc/hostname
	I0830 22:27:01.186648 1132918 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-631059
	
	I0830 22:27:01.186745 1132918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-631059
	I0830 22:27:01.206441 1132918 main.go:141] libmachine: Using SSH client type: native
	I0830 22:27:01.206921 1132918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34224 <nil> <nil>}
	I0830 22:27:01.206943 1132918 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-631059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-631059/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-631059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:27:01.350663 1132918 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:27:01.350687 1132918 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17145-984449/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-984449/.minikube}
	I0830 22:27:01.350824 1132918 ubuntu.go:177] setting up certificates
	I0830 22:27:01.350835 1132918 provision.go:83] configureAuth start
	I0830 22:27:01.350923 1132918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-631059
	I0830 22:27:01.370048 1132918 provision.go:138] copyHostCerts
	I0830 22:27:01.370121 1132918 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem, removing ...
	I0830 22:27:01.370133 1132918 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem
	I0830 22:27:01.370211 1132918 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem (1082 bytes)
	I0830 22:27:01.370321 1132918 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem, removing ...
	I0830 22:27:01.370331 1132918 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem
	I0830 22:27:01.370361 1132918 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem (1123 bytes)
	I0830 22:27:01.370426 1132918 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem, removing ...
	I0830 22:27:01.370435 1132918 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem
	I0830 22:27:01.370459 1132918 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem (1679 bytes)
	I0830 22:27:01.370517 1132918 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-631059 san=[192.168.70.20 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-631059]
	I0830 22:27:02.608010 1132918 provision.go:172] copyRemoteCerts
	I0830 22:27:02.608119 1132918 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:27:02.608192 1132918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-631059
	I0830 22:27:02.627120 1132918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34224 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/running-upgrade-631059/id_rsa Username:docker}
	I0830 22:27:02.741781 1132918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0830 22:27:02.806660 1132918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0830 22:27:02.847138 1132918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:27:02.889545 1132918 provision.go:86] duration metric: configureAuth took 1.538682278s
	I0830 22:27:02.889620 1132918 ubuntu.go:193] setting minikube options for container-runtime
	I0830 22:27:02.889885 1132918 config.go:182] Loaded profile config "running-upgrade-631059": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0830 22:27:02.890056 1132918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-631059
	I0830 22:27:02.932657 1132918 main.go:141] libmachine: Using SSH client type: native
	I0830 22:27:02.933084 1132918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34224 <nil> <nil>}
	I0830 22:27:02.933109 1132918 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:27:03.542735 1132918 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:27:03.542759 1132918 machine.go:91] provisioned docker machine in 2.529914224s
	I0830 22:27:03.542770 1132918 start.go:300] post-start starting for "running-upgrade-631059" (driver="docker")
	I0830 22:27:03.542780 1132918 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:27:03.542845 1132918 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:27:03.542896 1132918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-631059
	I0830 22:27:03.562572 1132918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34224 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/running-upgrade-631059/id_rsa Username:docker}
	I0830 22:27:03.664532 1132918 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:27:03.669025 1132918 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0830 22:27:03.669048 1132918 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0830 22:27:03.669060 1132918 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0830 22:27:03.669068 1132918 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0830 22:27:03.669078 1132918 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/addons for local assets ...
	I0830 22:27:03.669230 1132918 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/files for local assets ...
	I0830 22:27:03.669324 1132918 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem -> 9898252.pem in /etc/ssl/certs
	I0830 22:27:03.669432 1132918 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:27:03.679584 1132918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem --> /etc/ssl/certs/9898252.pem (1708 bytes)
	I0830 22:27:03.721906 1132918 start.go:303] post-start completed in 179.120704ms
	I0830 22:27:03.721985 1132918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 22:27:03.722044 1132918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-631059
	I0830 22:27:03.741752 1132918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34224 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/running-upgrade-631059/id_rsa Username:docker}
	I0830 22:27:03.848741 1132918 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0830 22:27:03.854895 1132918 fix.go:56] fixHost completed within 2.86427002s
	I0830 22:27:03.854919 1132918 start.go:83] releasing machines lock for "running-upgrade-631059", held for 2.864315124s
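
The acquire/release pair above ({Delay:500ms Timeout:10m0s} at 22:27:00.990538, released at 22:27:03.854919) serializes provisioning per machine. A sketch of the semantics those Spec fields imply, polling every Delay until the lock is free or Timeout expires (acquireLock and the lock path are illustrative stand-ins, not minikube's implementation):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquireLock retries an exclusive lock-file creation every delay,
	// giving up after timeout, mirroring the Delay/Timeout fields printed
	// in the machines-lock lines above.
	func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/running-upgrade-631059.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; safe to provision the machine")
	}
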
	I0830 22:27:03.854997 1132918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-631059
	I0830 22:27:03.873730 1132918 ssh_runner.go:195] Run: cat /version.json
	I0830 22:27:03.873768 1132918 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:27:03.873797 1132918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-631059
	I0830 22:27:03.873812 1132918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-631059
	I0830 22:27:03.902467 1132918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34224 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/running-upgrade-631059/id_rsa Username:docker}
	I0830 22:27:03.905561 1132918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34224 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/running-upgrade-631059/id_rsa Username:docker}
	W0830 22:27:03.997655 1132918 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0830 22:27:03.997803 1132918 ssh_runner.go:195] Run: systemctl --version
	I0830 22:27:04.076443 1132918 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:27:04.295227 1132918 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 22:27:04.306098 1132918 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:27:04.372359 1132918 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0830 22:27:04.372440 1132918 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:27:04.434348 1132918 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
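
The two find ... -exec mv commands above sideline the loopback, bridge, and podman CNI configs by renaming them to *.mk_disabled, so that kindnet (recommended at 22:27:00.912197) is the only pod network CRI-O can load. A pure-Go sketch of the same rename, assuming the .mk_disabled convention (disableCNIConfigs is a hypothetical helper):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableCNIConfigs renames matching files in dir to *.mk_disabled,
	// mirroring the find/mv commands in the log above.
	func disableCNIConfigs(dir string, patterns []string) ([]string, error) {
		var disabled []string
		for _, pat := range patterns {
			matches, err := filepath.Glob(filepath.Join(dir, pat))
			if err != nil {
				return nil, err
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already sidelined
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return nil, err
				}
				disabled = append(disabled, m)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableCNIConfigs("/etc/cni/net.d", []string{"*loopback.conf*", "*bridge*", "*podman*"})
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("disabled:", disabled)
	}
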
	I0830 22:27:04.434371 1132918 start.go:466] detecting cgroup driver to use...
	I0830 22:27:04.434403 1132918 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0830 22:27:04.434455 1132918 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:27:04.486312 1132918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:27:04.501795 1132918 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:27:04.501862 1132918 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:27:04.518823 1132918 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:27:04.535377 1132918 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0830 22:27:04.550753 1132918 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0830 22:27:04.550828 1132918 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:27:04.734170 1132918 docker.go:212] disabling docker service ...
	I0830 22:27:04.734239 1132918 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:27:04.750752 1132918 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:27:04.773496 1132918 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:27:05.143686 1132918 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:27:05.394185 1132918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:27:05.413793 1132918 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:27:05.483198 1132918 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0830 22:27:05.483291 1132918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:27:05.513348 1132918 out.go:177] 
	W0830 22:27:05.515683 1132918 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0830 22:27:05.515709 1132918 out.go:239] * 
	W0830 22:27:05.516749 1132918 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:27:05.518352 1132918 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-631059 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
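Root cause of the exit status 90: the sed target /etc/crio/crio.conf.d/02-crio.conf does not exist inside the v0.0.17 kicbase image that the v1.17.0 binary created, which presumably still ships a monolithic /etc/crio/crio.conf, so the pause_image rewrite exits 2 and start aborts with RUNTIME_ENABLE. A defensive sketch that probes for the drop-in before editing (hypothetical: the legacy fallback path is an assumption, and in minikube the command actually runs on the node over SSH via ssh_runner, not locally):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// updatePauseImage prefers the drop-in used by current kicbase images,
	// but falls back to the legacy monolithic config when the drop-in is
	// absent, as it is in this v0.0.17 image.
	func updatePauseImage(image string) error {
		candidates := []string{
			"/etc/crio/crio.conf.d/02-crio.conf", // modern layout
			"/etc/crio/crio.conf",                // assumed legacy layout
		}
		for _, conf := range candidates {
			if _, err := os.Stat(conf); err != nil {
				continue
			}
			expr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, image)
			return exec.Command("sudo", "sed", "-i", expr, conf).Run()
		}
		return fmt.Errorf("no CRI-O config found to update")
	}

	func main() {
		if err := updatePauseImage("registry.k8s.io/pause:3.2"); err != nil {
			fmt.Println(err)
		}
	}
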
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-08-30 22:27:05.564431163 +0000 UTC m=+2980.366585767
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-631059
helpers_test.go:235: (dbg) docker inspect running-upgrade-631059:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "01487bb67815339dcfd35b165d307bd787f83e5a493679affaa465a044410703",
	        "Created": "2023-08-30T22:26:15.539737173Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1129171,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-30T22:26:15.998580432Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/01487bb67815339dcfd35b165d307bd787f83e5a493679affaa465a044410703/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/01487bb67815339dcfd35b165d307bd787f83e5a493679affaa465a044410703/hostname",
	        "HostsPath": "/var/lib/docker/containers/01487bb67815339dcfd35b165d307bd787f83e5a493679affaa465a044410703/hosts",
	        "LogPath": "/var/lib/docker/containers/01487bb67815339dcfd35b165d307bd787f83e5a493679affaa465a044410703/01487bb67815339dcfd35b165d307bd787f83e5a493679affaa465a044410703-json.log",
	        "Name": "/running-upgrade-631059",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-631059:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-631059",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e41e2655f110b0913ae1013c9cc15f3b374c94a2f24b1df419d5473073c39a2d-init/diff:/var/lib/docker/overlay2/65f35d4f7bc28731f83ff56be45961ea2613109c4a833d74f215efbf28cb2c90/diff:/var/lib/docker/overlay2/2939f0515c29fb5448a83ea8cd4e3028daffcf9341df84d9412be10836e99c3c/diff:/var/lib/docker/overlay2/9a27d20b6971e734ac332690d8f704f892a6d8f7b1204c8766839fcfdadd2783/diff:/var/lib/docker/overlay2/f8137640168261f9de065c5ad4b6348b6c28d17ec5a146544adda3dfba3564de/diff:/var/lib/docker/overlay2/e4020d66d1f373c2a80b3a24d3eb9a54a8e3637c6d38d5cd91cae15e5d6f8b43/diff:/var/lib/docker/overlay2/b179e51d88f7b980301959c772d9cc674304f0d51c85cde1272ce51a2c9a20cf/diff:/var/lib/docker/overlay2/4a2c2949af88c54174183fcf241a2fa6fa8714aff94954dd1867cba9b3b71806/diff:/var/lib/docker/overlay2/628818144a8919662032ec83c82bac61d5590053d034c7a1ac930ebfae5c8e6a/diff:/var/lib/docker/overlay2/fe76eb5bd51b0b2e916b7486684149cb44446b88e88f482de599e7724ffe5d46/diff:/var/lib/docker/overlay2/eb0de7
91ba2953f23c201d61d03375098909c75c8f1ae57a208db53aed272fba/diff:/var/lib/docker/overlay2/af055118e5928681137d845f16d58c22b3d8f2adc57f3aef827bf4b80b463bc0/diff:/var/lib/docker/overlay2/b3094fda9fc14231945e91d9a139579d864c776d6b667fed6a9e8d5d916e2aae/diff:/var/lib/docker/overlay2/f2ca6a50744aaec8840b3ed16c22c46e5c92210621390c764b926c6c4c3f6c12/diff:/var/lib/docker/overlay2/07234c09057658f475dee3ca52d2e8ce4c76693c9c03522d17619d9d5a157197/diff:/var/lib/docker/overlay2/37695118e1da9cfda6e1f6905de3a3e4ca46a5769ed50a92f50dbb1660bc3e07/diff:/var/lib/docker/overlay2/9e7f062dabf68be621d5c026d72898e8580d1cb2ffb4906f57c2bd31c19f9d4b/diff:/var/lib/docker/overlay2/80683a9dd71aeeb8a4844aa6447edc57a4b0fca2faf38aa233d7f6bda4b5285b/diff:/var/lib/docker/overlay2/c8d88f55b5e1b6badce1a92cbc1f54729a8a88255cccb3f0c49f0ceeec4a54da/diff:/var/lib/docker/overlay2/2393907656bb29acff85c17f925fcb62de24706a7011766073d72372597044dd/diff:/var/lib/docker/overlay2/14f8ff8276a21b66cb42d1baf076713973cea5d5cac13b4f9c2685464ccdf61a/diff:/var/lib/d
ocker/overlay2/8e81e8f83510565f347b2300df2e478eacf6c23184620acb4ddc82c13da0458e/diff:/var/lib/docker/overlay2/83dc965602d3f8db214307f119330278377b08dc046b722d10931f1a73a2bd68/diff:/var/lib/docker/overlay2/a6eb4a24ac19919811f74e7d5878e468e93ca625afe0a9b5f1d1eaa03fde2377/diff:/var/lib/docker/overlay2/e6ec2239d9c3801f63512363560c5f34acc29cc9278ba336820bd03a4a18686c/diff:/var/lib/docker/overlay2/a10a16a21078444e8159122b605ed33ca19cf923b5d444fd6b14e577b6919496/diff:/var/lib/docker/overlay2/7f7c9d10b94b7aab556d22453ed6a2c0077f2402ff6449eb4aeceabe980a1877/diff:/var/lib/docker/overlay2/e337bd1b5db107e970cabb27ca68707a1cc89a2f34a85bac22b999d52b5668ce/diff:/var/lib/docker/overlay2/985120813fdc12aa6649b368f7bf4daa98d61cc486b26a6993ab0a3359f45852/diff:/var/lib/docker/overlay2/1e17eb9580ae3406cd4a5dcdfde1e0208a505d74e55d4f4e95275fbf34c42db4/diff:/var/lib/docker/overlay2/f4c9bed60a5f32c554190b3897690bd24481df1305c0e1bb505dbb4a339b497d/diff:/var/lib/docker/overlay2/907a5c91474e7ad4126f438b47df1bd5993a96efde2bb30c4d62f640e66
3a5b9/diff:/var/lib/docker/overlay2/da5bbee28dda83010b2e3f2cae00751ede00ce898b6285eede6dc99b1c5d1544/diff:/var/lib/docker/overlay2/fe2081e633672ce5817dd1398c869b0a42c603529904df343c6d74aa8466d63b/diff:/var/lib/docker/overlay2/318dcb3965396b79083a1626bba59c00513125da1b3f518d6d31dbd1aabc9cd2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e41e2655f110b0913ae1013c9cc15f3b374c94a2f24b1df419d5473073c39a2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e41e2655f110b0913ae1013c9cc15f3b374c94a2f24b1df419d5473073c39a2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e41e2655f110b0913ae1013c9cc15f3b374c94a2f24b1df419d5473073c39a2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-631059",
	                "Source": "/var/lib/docker/volumes/running-upgrade-631059/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-631059",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-631059",
	                "name.minikube.sigs.k8s.io": "running-upgrade-631059",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b5f738ab3654fafe6edb652468f836668fb1063a9f52d2c0415290281a687028",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34224"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34223"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34222"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34221"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b5f738ab3654",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-631059": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.20"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "01487bb67815",
	                        "running-upgrade-631059"
	                    ],
	                    "NetworkID": "f9621fdca16ce7bc799bbee5c2ccf59aafc578209160751bbf115b5d1013ea25",
	                    "EndpointID": "8124fa9859b337fd3093a0ccbcfd4fcce136bc93a96d84c41b1fec350014b800",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.20",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:14",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
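Note the empty "HostPort": "" entries under HostConfig.PortBindings in the inspect output above: minikube lets Docker choose ephemeral host ports, so the bound values (34221-34224) appear only under NetworkSettings.Ports. That is why every SSH step in the log resolves the port with the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}. The same lookup as a small Go sketch (sshHostPort is an illustrative helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort resolves the host port Docker bound to the container's
	// 22/tcp, using the same inspect template the cli_runner lines run.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("running-upgrade-631059")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh reachable on 127.0.0.1:" + port) // 34224 in this run
	}
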
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-631059 -n running-upgrade-631059
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-631059 -n running-upgrade-631059: exit status 4 (551.166735ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:27:06.101475 1133618 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-631059" does not appear in /home/jenkins/minikube-integration/17145-984449/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-631059" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
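The status.go:415 error above is a follow-on symptom rather than a separate bug: start aborted before it could write the new cluster into the kubeconfig, so the "running-upgrade-631059" entry is missing. A minimal check for that condition using the standard client-go loader (the snippet is illustrative, not minikube's actual status code):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the same kubeconfig the status command inspected.
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/17145-984449/kubeconfig")
		if err != nil {
			fmt.Println(err)
			return
		}
		if _, ok := cfg.Clusters["running-upgrade-631059"]; !ok {
			// Matches the E0830 status.go:415 error above.
			fmt.Println(`"running-upgrade-631059" does not appear in the kubeconfig`)
		}
	}
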
helpers_test.go:175: Cleaning up "running-upgrade-631059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-631059
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-631059: (2.69636793s)
--- FAIL: TestRunningBinaryUpgrade (67.89s)

                                                
                                    
x
+
TestMissingContainerUpgrade (99.09s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.17.0.1498723700.exe start -p missing-upgrade-549739 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.17.0.1498723700.exe start -p missing-upgrade-549739 --memory=2200 --driver=docker  --container-runtime=crio: (59.570948214s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-549739
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-549739: (10.338873958s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-549739
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-549739 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:341: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-549739 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (25.66238519s)

                                                
                                                
-- stdout --
	* [missing-upgrade-549739] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17145
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-549739 in cluster missing-upgrade-549739
	* Pulling base image ...
	* docker "missing-upgrade-549739" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
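Up to this point the scenario works as designed: after the test's docker stop and docker rm, the new binary notices the container is gone ("container is missing, will recreate") and recreates it; the exit status 90 reported above means start still failed later in the run, in the stderr trace that follows. The missing-container probe, sketched with the same inspect invocation these logs use (containerState is a hypothetical helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState returns the container's state, or "missing" when
	// docker inspect fails because no such container exists, which is the
	// condition behind "container is missing, will recreate" above.
	func containerState(name string) string {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format={{.State.Status}}").Output()
		if err != nil {
			return "missing"
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		switch s := containerState("missing-upgrade-549739"); s {
		case "missing":
			fmt.Println("container is missing, will recreate")
		default:
			fmt.Println("container state:", s)
		}
	}
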
** stderr ** 
	I0830 22:22:23.962235 1119996 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:22:23.962493 1119996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:22:23.962525 1119996 out.go:309] Setting ErrFile to fd 2...
	I0830 22:22:23.962545 1119996 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:22:23.962855 1119996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
	I0830 22:22:23.963262 1119996 out.go:303] Setting JSON to false
	I0830 22:22:23.964440 1119996 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25478,"bootTime":1693408666,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0830 22:22:23.964547 1119996 start.go:138] virtualization:  
	I0830 22:22:23.967904 1119996 out.go:177] * [missing-upgrade-549739] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0830 22:22:23.970338 1119996 out.go:177]   - MINIKUBE_LOCATION=17145
	I0830 22:22:23.972203 1119996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:22:23.970506 1119996 notify.go:220] Checking for updates...
	I0830 22:22:23.974814 1119996 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 22:22:23.977019 1119996 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	I0830 22:22:23.978886 1119996 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0830 22:22:23.980721 1119996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:22:23.983058 1119996 config.go:182] Loaded profile config "missing-upgrade-549739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0830 22:22:23.985669 1119996 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0830 22:22:23.987713 1119996 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:22:24.013051 1119996 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 22:22:24.013214 1119996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:22:24.104256 1119996 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-30 22:22:24.092044807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:22:24.104377 1119996 docker.go:294] overlay module found
	I0830 22:22:24.106705 1119996 out.go:177] * Using the docker driver based on existing profile
	I0830 22:22:24.108359 1119996 start.go:298] selected driver: docker
	I0830 22:22:24.108375 1119996 start.go:902] validating driver "docker" against &{Name:missing-upgrade-549739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-549739 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.108 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:22:24.108495 1119996 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:22:24.109271 1119996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:22:24.184056 1119996 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-30 22:22:24.174085241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:22:24.184394 1119996 cni.go:84] Creating CNI manager for ""
	I0830 22:22:24.184412 1119996 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0830 22:22:24.184427 1119996 start_flags.go:319] config:
	{Name:missing-upgrade-549739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-549739 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.108 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:22:24.186415 1119996 out.go:177] * Starting control plane node missing-upgrade-549739 in cluster missing-upgrade-549739
	I0830 22:22:24.187976 1119996 cache.go:122] Beginning downloading kic base image for docker with crio
	I0830 22:22:24.190970 1119996 out.go:177] * Pulling base image ...
	I0830 22:22:24.192918 1119996 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0830 22:22:24.193013 1119996 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0830 22:22:24.212107 1119996 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0830 22:22:24.212129 1119996 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0830 22:22:24.269442 1119996 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0830 22:22:24.269600 1119996 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/missing-upgrade-549739/config.json ...
	I0830 22:22:24.269862 1119996 cache.go:195] Successfully downloaded all kic artifacts
	I0830 22:22:24.269902 1119996 start.go:365] acquiring machines lock for missing-upgrade-549739: {Name:mk4fd3580d7c33f0fb1f87b5b09942566d21c104 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:22:24.269955 1119996 start.go:369] acquired machines lock for "missing-upgrade-549739" in 34.38µs
	I0830 22:22:24.269969 1119996 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:22:24.269975 1119996 fix.go:54] fixHost starting: 
	I0830 22:22:24.270242 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	I0830 22:22:24.270581 1119996 cache.go:107] acquiring lock: {Name:mkf5ab9713f972e910cdd35e849e7b313ff0cf80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:22:24.270647 1119996 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0830 22:22:24.270657 1119996 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 81.083µs
	I0830 22:22:24.270666 1119996 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0830 22:22:24.270674 1119996 cache.go:107] acquiring lock: {Name:mk06076f7b31c5287734228bdc2942cac2953015 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:22:24.270705 1119996 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0830 22:22:24.270710 1119996 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 37.334µs
	I0830 22:22:24.270716 1119996 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0830 22:22:24.270723 1119996 cache.go:107] acquiring lock: {Name:mk09a8e8ef4f40d9e8afb0f142b26cbc91a70a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:22:24.270754 1119996 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0830 22:22:24.270759 1119996 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 37.718µs
	I0830 22:22:24.270766 1119996 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0830 22:22:24.270775 1119996 cache.go:107] acquiring lock: {Name:mk852a6aca1b3325ffa93aa8a30a68ac177b5cf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:22:24.270809 1119996 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0830 22:22:24.270814 1119996 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 40.155µs
	I0830 22:22:24.270821 1119996 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0830 22:22:24.270830 1119996 cache.go:107] acquiring lock: {Name:mkfad56a71e12611916dea6bf70fd042ac640a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:22:24.270858 1119996 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0830 22:22:24.270863 1119996 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 34.847µs
	I0830 22:22:24.270874 1119996 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0830 22:22:24.270882 1119996 cache.go:107] acquiring lock: {Name:mk208fc2f3b60244f9f2ab5a28145abac20df0d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:22:24.270907 1119996 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0830 22:22:24.270912 1119996 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 31.697µs
	I0830 22:22:24.270919 1119996 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0830 22:22:24.270928 1119996 cache.go:107] acquiring lock: {Name:mkd5c8e89021331bf56571747bef80c528c1deb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:22:24.270952 1119996 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0830 22:22:24.270956 1119996 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 29.062µs
	I0830 22:22:24.270962 1119996 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0830 22:22:24.270970 1119996 cache.go:107] acquiring lock: {Name:mk0f0f8d201bdc1fca6426f53c7ecf3d4fa67ad9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:22:24.270994 1119996 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0830 22:22:24.271000 1119996 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 29.867µs
	I0830 22:22:24.271006 1119996 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0830 22:22:24.271011 1119996 cache.go:87] Successfully saved all images to host disk.
	W0830 22:22:24.289469 1119996 cli_runner.go:211] docker container inspect missing-upgrade-549739 --format={{.State.Status}} returned with exit code 1
	I0830 22:22:24.289539 1119996 fix.go:102] recreateIfNeeded on missing-upgrade-549739: state= err=unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:24.289563 1119996 fix.go:107] machineExists: false. err=machine does not exist
	I0830 22:22:24.291588 1119996 out.go:177] * docker "missing-upgrade-549739" container is missing, will recreate.
	I0830 22:22:24.293219 1119996 delete.go:124] DEMOLISHING missing-upgrade-549739 ...
	I0830 22:22:24.293327 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	W0830 22:22:24.310581 1119996 cli_runner.go:211] docker container inspect missing-upgrade-549739 --format={{.State.Status}} returned with exit code 1
	W0830 22:22:24.310638 1119996 stop.go:75] unable to get state: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:24.310659 1119996 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:24.311140 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	W0830 22:22:24.327909 1119996 cli_runner.go:211] docker container inspect missing-upgrade-549739 --format={{.State.Status}} returned with exit code 1
	I0830 22:22:24.327975 1119996 delete.go:82] Unable to get host status for missing-upgrade-549739, assuming it has already been deleted: state: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:24.328051 1119996 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-549739
	W0830 22:22:24.345643 1119996 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-549739 returned with exit code 1
	I0830 22:22:24.345685 1119996 kic.go:367] could not find the container missing-upgrade-549739 to remove it. will try anyways
	I0830 22:22:24.345745 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	W0830 22:22:24.362724 1119996 cli_runner.go:211] docker container inspect missing-upgrade-549739 --format={{.State.Status}} returned with exit code 1
	W0830 22:22:24.362786 1119996 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:24.362869 1119996 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-549739 /bin/bash -c "sudo init 0"
	W0830 22:22:24.379507 1119996 cli_runner.go:211] docker exec --privileged -t missing-upgrade-549739 /bin/bash -c "sudo init 0" returned with exit code 1
	I0830 22:22:24.379546 1119996 oci.go:647] error shutdown missing-upgrade-549739: docker exec --privileged -t missing-upgrade-549739 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:25.379759 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	W0830 22:22:25.396399 1119996 cli_runner.go:211] docker container inspect missing-upgrade-549739 --format={{.State.Status}} returned with exit code 1
	I0830 22:22:25.396480 1119996 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:25.396494 1119996 oci.go:661] temporary error: container missing-upgrade-549739 status is  but expect it to be exited
	I0830 22:22:25.396522 1119996 retry.go:31] will retry after 481.382642ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:25.878142 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	W0830 22:22:25.895597 1119996 cli_runner.go:211] docker container inspect missing-upgrade-549739 --format={{.State.Status}} returned with exit code 1
	I0830 22:22:25.895663 1119996 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:25.895685 1119996 oci.go:661] temporary error: container missing-upgrade-549739 status is  but expect it to be exited
	I0830 22:22:25.895710 1119996 retry.go:31] will retry after 888.326503ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:26.784383 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	W0830 22:22:26.801498 1119996 cli_runner.go:211] docker container inspect missing-upgrade-549739 --format={{.State.Status}} returned with exit code 1
	I0830 22:22:26.801557 1119996 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:26.801572 1119996 oci.go:661] temporary error: container missing-upgrade-549739 status is  but expect it to be exited
	I0830 22:22:26.801597 1119996 retry.go:31] will retry after 1.349743911s: couldn't verify container is exited. %v: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:28.151620 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	W0830 22:22:28.168340 1119996 cli_runner.go:211] docker container inspect missing-upgrade-549739 --format={{.State.Status}} returned with exit code 1
	I0830 22:22:28.168402 1119996 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:28.168419 1119996 oci.go:661] temporary error: container missing-upgrade-549739 status is  but expect it to be exited
	I0830 22:22:28.168444 1119996 retry.go:31] will retry after 1.214615399s: couldn't verify container is exited. %v: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:29.383514 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	W0830 22:22:29.406116 1119996 cli_runner.go:211] docker container inspect missing-upgrade-549739 --format={{.State.Status}} returned with exit code 1
	I0830 22:22:29.406174 1119996 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:29.406182 1119996 oci.go:661] temporary error: container missing-upgrade-549739 status is  but expect it to be exited
	I0830 22:22:29.406205 1119996 retry.go:31] will retry after 2.194472856s: couldn't verify container is exited. %v: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:31.602602 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	W0830 22:22:31.621548 1119996 cli_runner.go:211] docker container inspect missing-upgrade-549739 --format={{.State.Status}} returned with exit code 1
	I0830 22:22:31.621613 1119996 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:31.621629 1119996 oci.go:661] temporary error: container missing-upgrade-549739 status is  but expect it to be exited
	I0830 22:22:31.621662 1119996 retry.go:31] will retry after 4.828637369s: couldn't verify container is exited. %v: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:36.452668 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	W0830 22:22:36.484066 1119996 cli_runner.go:211] docker container inspect missing-upgrade-549739 --format={{.State.Status}} returned with exit code 1
	I0830 22:22:36.484131 1119996 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:36.484140 1119996 oci.go:661] temporary error: container missing-upgrade-549739 status is  but expect it to be exited
	I0830 22:22:36.484164 1119996 retry.go:31] will retry after 5.01255245s: couldn't verify container is exited. %v: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:41.498964 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	W0830 22:22:41.518905 1119996 cli_runner.go:211] docker container inspect missing-upgrade-549739 --format={{.State.Status}} returned with exit code 1
	I0830 22:22:41.518961 1119996 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	I0830 22:22:41.518975 1119996 oci.go:661] temporary error: container missing-upgrade-549739 status is  but expect it to be exited
	I0830 22:22:41.519006 1119996 oci.go:88] couldn't shut down missing-upgrade-549739 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-549739": docker container inspect missing-upgrade-549739 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-549739
	 
	I0830 22:22:41.519157 1119996 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-549739
	I0830 22:22:41.544663 1119996 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-549739
	W0830 22:22:41.563829 1119996 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-549739 returned with exit code 1
	I0830 22:22:41.563915 1119996 cli_runner.go:164] Run: docker network inspect missing-upgrade-549739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0830 22:22:41.588735 1119996 cli_runner.go:164] Run: docker network rm missing-upgrade-549739
	I0830 22:22:41.699814 1119996 fix.go:114] Sleeping 1 second for extra luck!
	I0830 22:22:42.699929 1119996 start.go:125] createHost starting for "" (driver="docker")
	I0830 22:22:42.701984 1119996 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0830 22:22:42.702110 1119996 start.go:159] libmachine.API.Create for "missing-upgrade-549739" (driver="docker")
	I0830 22:22:42.702128 1119996 client.go:168] LocalClient.Create starting
	I0830 22:22:42.702673 1119996 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem
	I0830 22:22:42.702711 1119996 main.go:141] libmachine: Decoding PEM data...
	I0830 22:22:42.702727 1119996 main.go:141] libmachine: Parsing certificate...
	I0830 22:22:42.702784 1119996 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem
	I0830 22:22:42.702801 1119996 main.go:141] libmachine: Decoding PEM data...
	I0830 22:22:42.702815 1119996 main.go:141] libmachine: Parsing certificate...
	I0830 22:22:42.703066 1119996 cli_runner.go:164] Run: docker network inspect missing-upgrade-549739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0830 22:22:42.726669 1119996 cli_runner.go:211] docker network inspect missing-upgrade-549739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0830 22:22:42.726786 1119996 network_create.go:281] running [docker network inspect missing-upgrade-549739] to gather additional debugging logs...
	I0830 22:22:42.726803 1119996 cli_runner.go:164] Run: docker network inspect missing-upgrade-549739
	W0830 22:22:42.756149 1119996 cli_runner.go:211] docker network inspect missing-upgrade-549739 returned with exit code 1
	I0830 22:22:42.756178 1119996 network_create.go:284] error running [docker network inspect missing-upgrade-549739]: docker network inspect missing-upgrade-549739: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-549739 not found
	I0830 22:22:42.756191 1119996 network_create.go:286] output of [docker network inspect missing-upgrade-549739]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-549739 not found
	
	** /stderr **
	I0830 22:22:42.756261 1119996 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0830 22:22:42.785435 1119996 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1011c5a7d786 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:38:8f:57:4b} reservation:<nil>}
	I0830 22:22:42.785893 1119996 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5b839e887fc7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:37:51:3b:d9} reservation:<nil>}
	I0830 22:22:42.786220 1119996 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-01beebb14e41 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:18:c0:ab:ec} reservation:<nil>}
	I0830 22:22:42.786726 1119996 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000e8bdb0}
	I0830 22:22:42.786751 1119996 network_create.go:123] attempt to create docker network missing-upgrade-549739 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0830 22:22:42.786813 1119996 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-549739 missing-upgrade-549739
	I0830 22:22:42.870134 1119996 network_create.go:107] docker network missing-upgrade-549739 192.168.76.0/24 created
	I0830 22:22:42.870166 1119996 kic.go:117] calculated static IP "192.168.76.2" for the "missing-upgrade-549739" container
	I0830 22:22:42.870255 1119996 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0830 22:22:42.891919 1119996 cli_runner.go:164] Run: docker volume create missing-upgrade-549739 --label name.minikube.sigs.k8s.io=missing-upgrade-549739 --label created_by.minikube.sigs.k8s.io=true
	I0830 22:22:42.913426 1119996 oci.go:103] Successfully created a docker volume missing-upgrade-549739
	I0830 22:22:42.913508 1119996 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-549739-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-549739 --entrypoint /usr/bin/test -v missing-upgrade-549739:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I0830 22:22:43.454114 1119996 oci.go:107] Successfully prepared a docker volume missing-upgrade-549739
	I0830 22:22:43.454148 1119996 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W0830 22:22:43.454314 1119996 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0830 22:22:43.454436 1119996 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0830 22:22:43.572854 1119996 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-549739 --name missing-upgrade-549739 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-549739 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-549739 --network missing-upgrade-549739 --ip 192.168.76.2 --volume missing-upgrade-549739:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I0830 22:22:44.058045 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Running}}
	I0830 22:22:44.083515 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	I0830 22:22:44.106580 1119996 cli_runner.go:164] Run: docker exec missing-upgrade-549739 stat /var/lib/dpkg/alternatives/iptables
	I0830 22:22:44.210277 1119996 oci.go:144] the created container "missing-upgrade-549739" has a running status.
	I0830 22:22:44.210307 1119996 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/missing-upgrade-549739/id_rsa...
	I0830 22:22:44.617406 1119996 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17145-984449/.minikube/machines/missing-upgrade-549739/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0830 22:22:44.658825 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	I0830 22:22:44.690476 1119996 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0830 22:22:44.690495 1119996 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-549739 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0830 22:22:44.763893 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	I0830 22:22:44.785806 1119996 machine.go:88] provisioning docker machine ...
	I0830 22:22:44.785839 1119996 ubuntu.go:169] provisioning hostname "missing-upgrade-549739"
	I0830 22:22:44.785920 1119996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-549739
	I0830 22:22:44.806805 1119996 main.go:141] libmachine: Using SSH client type: native
	I0830 22:22:44.807278 1119996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I0830 22:22:44.807295 1119996 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-549739 && echo "missing-upgrade-549739" | sudo tee /etc/hostname
	I0830 22:22:44.968815 1119996 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-549739
	
	I0830 22:22:44.968914 1119996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-549739
	I0830 22:22:44.993698 1119996 main.go:141] libmachine: Using SSH client type: native
	I0830 22:22:44.994187 1119996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I0830 22:22:44.994215 1119996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-549739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-549739/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-549739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:22:45.160651 1119996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:22:45.160677 1119996 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17145-984449/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-984449/.minikube}
	I0830 22:22:45.160707 1119996 ubuntu.go:177] setting up certificates
	I0830 22:22:45.160719 1119996 provision.go:83] configureAuth start
	I0830 22:22:45.160797 1119996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-549739
	I0830 22:22:45.191515 1119996 provision.go:138] copyHostCerts
	I0830 22:22:45.191586 1119996 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem, removing ...
	I0830 22:22:45.191597 1119996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem
	I0830 22:22:45.191679 1119996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem (1082 bytes)
	I0830 22:22:45.191791 1119996 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem, removing ...
	I0830 22:22:45.191797 1119996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem
	I0830 22:22:45.191828 1119996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem (1123 bytes)
	I0830 22:22:45.191893 1119996 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem, removing ...
	I0830 22:22:45.191898 1119996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem
	I0830 22:22:45.191922 1119996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem (1679 bytes)
	I0830 22:22:45.191992 1119996 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-549739 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-549739]
	I0830 22:22:45.613838 1119996 provision.go:172] copyRemoteCerts
	I0830 22:22:45.613906 1119996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:22:45.613954 1119996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-549739
	I0830 22:22:45.632349 1119996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/missing-upgrade-549739/id_rsa Username:docker}
	I0830 22:22:45.735950 1119996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0830 22:22:45.763904 1119996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0830 22:22:45.789535 1119996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:22:45.813289 1119996 provision.go:86] duration metric: configureAuth took 652.557031ms
	I0830 22:22:45.813312 1119996 ubuntu.go:193] setting minikube options for container-runtime
	I0830 22:22:45.813496 1119996 config.go:182] Loaded profile config "missing-upgrade-549739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0830 22:22:45.813607 1119996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-549739
	I0830 22:22:45.832038 1119996 main.go:141] libmachine: Using SSH client type: native
	I0830 22:22:45.832542 1119996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I0830 22:22:45.832563 1119996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:22:46.244998 1119996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:22:46.245022 1119996 machine.go:91] provisioned docker machine in 1.459193191s
	I0830 22:22:46.245032 1119996 client.go:171] LocalClient.Create took 3.542899167s
	I0830 22:22:46.245045 1119996 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-549739" took 3.542935213s
	I0830 22:22:46.245052 1119996 start.go:300] post-start starting for "missing-upgrade-549739" (driver="docker")
	I0830 22:22:46.245060 1119996 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:22:46.245124 1119996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:22:46.245184 1119996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-549739
	I0830 22:22:46.264601 1119996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/missing-upgrade-549739/id_rsa Username:docker}
	I0830 22:22:46.367079 1119996 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:22:46.371181 1119996 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0830 22:22:46.371208 1119996 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0830 22:22:46.371220 1119996 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0830 22:22:46.371238 1119996 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0830 22:22:46.371248 1119996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/addons for local assets ...
	I0830 22:22:46.371320 1119996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/files for local assets ...
	I0830 22:22:46.371397 1119996 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem -> 9898252.pem in /etc/ssl/certs
	I0830 22:22:46.371499 1119996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:22:46.380306 1119996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem --> /etc/ssl/certs/9898252.pem (1708 bytes)
	I0830 22:22:46.403867 1119996 start.go:303] post-start completed in 158.800724ms
	I0830 22:22:46.404239 1119996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-549739
	I0830 22:22:46.421936 1119996 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/missing-upgrade-549739/config.json ...
	I0830 22:22:46.422216 1119996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 22:22:46.422274 1119996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-549739
	I0830 22:22:46.440262 1119996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/missing-upgrade-549739/id_rsa Username:docker}
	I0830 22:22:46.536102 1119996 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0830 22:22:46.542277 1119996 start.go:128] duration metric: createHost completed in 3.842312943s
	I0830 22:22:46.542378 1119996 cli_runner.go:164] Run: docker container inspect missing-upgrade-549739 --format={{.State.Status}}
	W0830 22:22:46.560659 1119996 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:22:46.560684 1119996 machine.go:88] provisioning docker machine ...
	I0830 22:22:46.560703 1119996 ubuntu.go:169] provisioning hostname "missing-upgrade-549739"
	I0830 22:22:46.560768 1119996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-549739
	I0830 22:22:46.578860 1119996 main.go:141] libmachine: Using SSH client type: native
	I0830 22:22:46.579302 1119996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I0830 22:22:46.579314 1119996 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-549739 && echo "missing-upgrade-549739" | sudo tee /etc/hostname
	I0830 22:22:46.728480 1119996 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-549739
	
	I0830 22:22:46.728561 1119996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-549739
	I0830 22:22:46.747772 1119996 main.go:141] libmachine: Using SSH client type: native
	I0830 22:22:46.748210 1119996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I0830 22:22:46.748232 1119996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-549739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-549739/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-549739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:22:46.890207 1119996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:22:46.890232 1119996 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17145-984449/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-984449/.minikube}
	I0830 22:22:46.890248 1119996 ubuntu.go:177] setting up certificates
	I0830 22:22:46.890264 1119996 provision.go:83] configureAuth start
	I0830 22:22:46.890328 1119996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-549739
	I0830 22:22:46.908017 1119996 provision.go:138] copyHostCerts
	I0830 22:22:46.908083 1119996 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem, removing ...
	I0830 22:22:46.908094 1119996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem
	I0830 22:22:46.908167 1119996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem (1082 bytes)
	I0830 22:22:46.908268 1119996 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem, removing ...
	I0830 22:22:46.908280 1119996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem
	I0830 22:22:46.908308 1119996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem (1123 bytes)
	I0830 22:22:46.908371 1119996 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem, removing ...
	I0830 22:22:46.908381 1119996 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem
	I0830 22:22:46.908407 1119996 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem (1679 bytes)
	I0830 22:22:46.908458 1119996 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-549739 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-549739]
	I0830 22:22:47.817864 1119996 provision.go:172] copyRemoteCerts
	I0830 22:22:47.817972 1119996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:22:47.818018 1119996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-549739
	I0830 22:22:47.843020 1119996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/missing-upgrade-549739/id_rsa Username:docker}
	I0830 22:22:47.942251 1119996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0830 22:22:47.967228 1119996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0830 22:22:47.989954 1119996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:22:48.014004 1119996 provision.go:86] duration metric: configureAuth took 1.12372351s
	I0830 22:22:48.014029 1119996 ubuntu.go:193] setting minikube options for container-runtime
	I0830 22:22:48.014225 1119996 config.go:182] Loaded profile config "missing-upgrade-549739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0830 22:22:48.014353 1119996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-549739
	I0830 22:22:48.032179 1119996 main.go:141] libmachine: Using SSH client type: native
	I0830 22:22:48.032615 1119996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I0830 22:22:48.032637 1119996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:22:48.367118 1119996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:22:48.367147 1119996 machine.go:91] provisioned docker machine in 1.806456067s
	I0830 22:22:48.367158 1119996 start.go:300] post-start starting for "missing-upgrade-549739" (driver="docker")
	I0830 22:22:48.367168 1119996 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:22:48.367234 1119996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:22:48.367287 1119996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-549739
	I0830 22:22:48.388995 1119996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/missing-upgrade-549739/id_rsa Username:docker}
	I0830 22:22:48.491299 1119996 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:22:48.495630 1119996 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0830 22:22:48.495693 1119996 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0830 22:22:48.495713 1119996 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0830 22:22:48.495721 1119996 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0830 22:22:48.495731 1119996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/addons for local assets ...
	I0830 22:22:48.495791 1119996 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/files for local assets ...
	I0830 22:22:48.495872 1119996 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem -> 9898252.pem in /etc/ssl/certs
	I0830 22:22:48.495982 1119996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:22:48.505234 1119996 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem --> /etc/ssl/certs/9898252.pem (1708 bytes)
	I0830 22:22:48.529685 1119996 start.go:303] post-start completed in 162.511097ms
	I0830 22:22:48.529774 1119996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 22:22:48.529827 1119996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-549739
	I0830 22:22:48.548593 1119996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/missing-upgrade-549739/id_rsa Username:docker}
	I0830 22:22:48.643467 1119996 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0830 22:22:48.649173 1119996 fix.go:56] fixHost completed within 24.379189202s
	I0830 22:22:48.649195 1119996 start.go:83] releasing machines lock for "missing-upgrade-549739", held for 24.379232049s
	I0830 22:22:48.649264 1119996 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-549739
	I0830 22:22:48.667444 1119996 ssh_runner.go:195] Run: cat /version.json
	I0830 22:22:48.667503 1119996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-549739
	I0830 22:22:48.667735 1119996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:22:48.667794 1119996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-549739
	I0830 22:22:48.696644 1119996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/missing-upgrade-549739/id_rsa Username:docker}
	I0830 22:22:48.699749 1119996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/missing-upgrade-549739/id_rsa Username:docker}
	W0830 22:22:48.898617 1119996 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0830 22:22:48.898700 1119996 ssh_runner.go:195] Run: systemctl --version
	I0830 22:22:48.904458 1119996 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:22:49.016710 1119996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 22:22:49.022445 1119996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:22:49.047917 1119996 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0830 22:22:49.048051 1119996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:22:49.086326 1119996 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:22:49.086347 1119996 start.go:466] detecting cgroup driver to use...
	I0830 22:22:49.086378 1119996 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0830 22:22:49.086427 1119996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:22:49.114203 1119996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:22:49.126111 1119996 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:22:49.126218 1119996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:22:49.139877 1119996 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:22:49.152257 1119996 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0830 22:22:49.165523 1119996 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0830 22:22:49.165587 1119996 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:22:49.266731 1119996 docker.go:212] disabling docker service ...
	I0830 22:22:49.266842 1119996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:22:49.280120 1119996 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:22:49.292804 1119996 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:22:49.398943 1119996 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:22:49.514459 1119996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:22:49.526518 1119996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:22:49.544428 1119996 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0830 22:22:49.544509 1119996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:22:49.557821 1119996 out.go:177] 
	W0830 22:22:49.559445 1119996 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0830 22:22:49.559516 1119996 out.go:239] * 
	W0830 22:22:49.560479 1119996 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:22:49.561965 1119996 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:343: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-549739 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:345: *** TestMissingContainerUpgrade FAILED at 2023-08-30 22:22:49.609921483 +0000 UTC m=+2724.412076079
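The stderr above shows the proximate cause: the kicbase:v0.0.17 image that minikube v1.17.0 provisioned appears to lack /etc/crio/crio.conf.d/02-crio.conf, so the sed rewriting pause_image exits with status 2. A minimal diagnostic sketch, assuming the missing-upgrade-549739 container from the post-mortem below were still running (illustrative only, not part of the test run):

	docker exec missing-upgrade-549739 ls /etc/crio/crio.conf.d/        # expected to fail: directory absent in this image
	docker exec missing-upgrade-549739 sh -c 'grep -rn pause_image /etc/crio/ 2>/dev/null'   # locate where pause_image is actually set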
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-549739
helpers_test.go:235: (dbg) docker inspect missing-upgrade-549739:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d604b6de0951e3be4d8f2ba016d7377e5017d9ac1f59cbeb8cbd69d9c7fd774",
	        "Created": "2023-08-30T22:22:43.603479254Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1120996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-30T22:22:44.050201458Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/0d604b6de0951e3be4d8f2ba016d7377e5017d9ac1f59cbeb8cbd69d9c7fd774/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d604b6de0951e3be4d8f2ba016d7377e5017d9ac1f59cbeb8cbd69d9c7fd774/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d604b6de0951e3be4d8f2ba016d7377e5017d9ac1f59cbeb8cbd69d9c7fd774/hosts",
	        "LogPath": "/var/lib/docker/containers/0d604b6de0951e3be4d8f2ba016d7377e5017d9ac1f59cbeb8cbd69d9c7fd774/0d604b6de0951e3be4d8f2ba016d7377e5017d9ac1f59cbeb8cbd69d9c7fd774-json.log",
	        "Name": "/missing-upgrade-549739",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-549739:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-549739",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/279471af4e052d95ec10b8576d4e64d58516b3ec5d593d466bf05d54a580472d-init/diff:/var/lib/docker/overlay2/65f35d4f7bc28731f83ff56be45961ea2613109c4a833d74f215efbf28cb2c90/diff:/var/lib/docker/overlay2/2939f0515c29fb5448a83ea8cd4e3028daffcf9341df84d9412be10836e99c3c/diff:/var/lib/docker/overlay2/9a27d20b6971e734ac332690d8f704f892a6d8f7b1204c8766839fcfdadd2783/diff:/var/lib/docker/overlay2/f8137640168261f9de065c5ad4b6348b6c28d17ec5a146544adda3dfba3564de/diff:/var/lib/docker/overlay2/e4020d66d1f373c2a80b3a24d3eb9a54a8e3637c6d38d5cd91cae15e5d6f8b43/diff:/var/lib/docker/overlay2/b179e51d88f7b980301959c772d9cc674304f0d51c85cde1272ce51a2c9a20cf/diff:/var/lib/docker/overlay2/4a2c2949af88c54174183fcf241a2fa6fa8714aff94954dd1867cba9b3b71806/diff:/var/lib/docker/overlay2/628818144a8919662032ec83c82bac61d5590053d034c7a1ac930ebfae5c8e6a/diff:/var/lib/docker/overlay2/fe76eb5bd51b0b2e916b7486684149cb44446b88e88f482de599e7724ffe5d46/diff:/var/lib/docker/overlay2/eb0de7
91ba2953f23c201d61d03375098909c75c8f1ae57a208db53aed272fba/diff:/var/lib/docker/overlay2/af055118e5928681137d845f16d58c22b3d8f2adc57f3aef827bf4b80b463bc0/diff:/var/lib/docker/overlay2/b3094fda9fc14231945e91d9a139579d864c776d6b667fed6a9e8d5d916e2aae/diff:/var/lib/docker/overlay2/f2ca6a50744aaec8840b3ed16c22c46e5c92210621390c764b926c6c4c3f6c12/diff:/var/lib/docker/overlay2/07234c09057658f475dee3ca52d2e8ce4c76693c9c03522d17619d9d5a157197/diff:/var/lib/docker/overlay2/37695118e1da9cfda6e1f6905de3a3e4ca46a5769ed50a92f50dbb1660bc3e07/diff:/var/lib/docker/overlay2/9e7f062dabf68be621d5c026d72898e8580d1cb2ffb4906f57c2bd31c19f9d4b/diff:/var/lib/docker/overlay2/80683a9dd71aeeb8a4844aa6447edc57a4b0fca2faf38aa233d7f6bda4b5285b/diff:/var/lib/docker/overlay2/c8d88f55b5e1b6badce1a92cbc1f54729a8a88255cccb3f0c49f0ceeec4a54da/diff:/var/lib/docker/overlay2/2393907656bb29acff85c17f925fcb62de24706a7011766073d72372597044dd/diff:/var/lib/docker/overlay2/14f8ff8276a21b66cb42d1baf076713973cea5d5cac13b4f9c2685464ccdf61a/diff:/var/lib/d
ocker/overlay2/8e81e8f83510565f347b2300df2e478eacf6c23184620acb4ddc82c13da0458e/diff:/var/lib/docker/overlay2/83dc965602d3f8db214307f119330278377b08dc046b722d10931f1a73a2bd68/diff:/var/lib/docker/overlay2/a6eb4a24ac19919811f74e7d5878e468e93ca625afe0a9b5f1d1eaa03fde2377/diff:/var/lib/docker/overlay2/e6ec2239d9c3801f63512363560c5f34acc29cc9278ba336820bd03a4a18686c/diff:/var/lib/docker/overlay2/a10a16a21078444e8159122b605ed33ca19cf923b5d444fd6b14e577b6919496/diff:/var/lib/docker/overlay2/7f7c9d10b94b7aab556d22453ed6a2c0077f2402ff6449eb4aeceabe980a1877/diff:/var/lib/docker/overlay2/e337bd1b5db107e970cabb27ca68707a1cc89a2f34a85bac22b999d52b5668ce/diff:/var/lib/docker/overlay2/985120813fdc12aa6649b368f7bf4daa98d61cc486b26a6993ab0a3359f45852/diff:/var/lib/docker/overlay2/1e17eb9580ae3406cd4a5dcdfde1e0208a505d74e55d4f4e95275fbf34c42db4/diff:/var/lib/docker/overlay2/f4c9bed60a5f32c554190b3897690bd24481df1305c0e1bb505dbb4a339b497d/diff:/var/lib/docker/overlay2/907a5c91474e7ad4126f438b47df1bd5993a96efde2bb30c4d62f640e66
3a5b9/diff:/var/lib/docker/overlay2/da5bbee28dda83010b2e3f2cae00751ede00ce898b6285eede6dc99b1c5d1544/diff:/var/lib/docker/overlay2/fe2081e633672ce5817dd1398c869b0a42c603529904df343c6d74aa8466d63b/diff:/var/lib/docker/overlay2/318dcb3965396b79083a1626bba59c00513125da1b3f518d6d31dbd1aabc9cd2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/279471af4e052d95ec10b8576d4e64d58516b3ec5d593d466bf05d54a580472d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/279471af4e052d95ec10b8576d4e64d58516b3ec5d593d466bf05d54a580472d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/279471af4e052d95ec10b8576d4e64d58516b3ec5d593d466bf05d54a580472d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-549739",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-549739/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-549739",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-549739",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-549739",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3121dc4e453feec327ab6bc9be6afc4c71fd32299a5664a6215e74040ce26904",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34210"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34209"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34206"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34208"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34207"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3121dc4e453f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-549739": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0d604b6de095",
	                        "missing-upgrade-549739"
	                    ],
	                    "NetworkID": "ef1ca4df8148afeac71603b0dba7273a20616f1747968b5e540b3101f3e062e7",
	                    "EndpointID": "6c0c484bbe51620bddcadf588d7ce3036dc357c76b6c00038cad780559924116",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
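For reference, the host-port mappings and container IP in the inspect dump above can be read back with the same Go templates minikube itself invokes later in this report; a sketch, assuming the container still exists:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' missing-upgrade-549739
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' missing-upgrade-549739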
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-549739 -n missing-upgrade-549739
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-549739 -n missing-upgrade-549739: exit status 6 (317.338869ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:22:49.933012 1121995 status.go:415] kubeconfig endpoint: got: 192.168.70.108:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-549739" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
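The exit status 6 follows from the endpoint mismatch in the stderr above: the kubeconfig records 192.168.70.108:8443 while the recreated container is at 192.168.76.2:8443. Had the profile been kept instead of deleted below, the warning's own suggestion would apply; a sketch (the verification step assumes this profile is the current kubectl context):

	out/minikube-linux-arm64 update-context -p missing-upgrade-549739
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # should now print https://192.168.76.2:8443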
helpers_test.go:175: Cleaning up "missing-upgrade-549739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-549739
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-549739: (2.06722335s)
--- FAIL: TestMissingContainerUpgrade (99.09s)
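To reproduce this failure in isolation from a minikube checkout, the standard go test -run selector can be used; a sketch, assuming out/minikube-linux-arm64 is already built, and noting that any driver/runtime flags the CI job passes are not shown in this report:

	go test ./test/integration -run 'TestMissingContainerUpgrade' -v -timeout 40m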

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (107.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.17.0.2531139357.exe start -p stopped-upgrade-836210 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0830 22:20:43.072830  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.17.0.2531139357.exe start -p stopped-upgrade-836210 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m31.487432556s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.17.0.2531139357.exe -p stopped-upgrade-836210 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.17.0.2531139357.exe -p stopped-upgrade-836210 stop: (2.219500642s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-836210 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-836210 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (13.911935743s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-836210] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17145
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-836210 in cluster stopped-upgrade-836210
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-836210" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 22:20:56.025676 1114305 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:20:56.025834 1114305 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:20:56.025844 1114305 out.go:309] Setting ErrFile to fd 2...
	I0830 22:20:56.025850 1114305 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:20:56.026169 1114305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
	I0830 22:20:56.026967 1114305 out.go:303] Setting JSON to false
	I0830 22:20:56.028785 1114305 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25390,"bootTime":1693408666,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0830 22:20:56.028857 1114305 start.go:138] virtualization:  
	I0830 22:20:56.031625 1114305 out.go:177] * [stopped-upgrade-836210] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0830 22:20:56.034243 1114305 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0830 22:20:56.034261 1114305 out.go:177]   - MINIKUBE_LOCATION=17145
	I0830 22:20:56.034268 1114305 notify.go:220] Checking for updates...
	I0830 22:20:56.036019 1114305 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:20:56.038211 1114305 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 22:20:56.040121 1114305 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	I0830 22:20:56.041955 1114305 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0830 22:20:56.043836 1114305 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:20:56.046359 1114305 config.go:182] Loaded profile config "stopped-upgrade-836210": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0830 22:20:56.049523 1114305 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0830 22:20:56.052888 1114305 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:20:56.100360 1114305 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 22:20:56.100445 1114305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:20:56.159756 1114305 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0830 22:20:56.203679 1114305 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-30 22:20:56.193454585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:20:56.203816 1114305 docker.go:294] overlay module found
	I0830 22:20:56.206635 1114305 out.go:177] * Using the docker driver based on existing profile
	I0830 22:20:56.208486 1114305 start.go:298] selected driver: docker
	I0830 22:20:56.208516 1114305 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-836210 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-836210 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.115 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:20:56.208626 1114305 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:20:56.209368 1114305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:20:56.285262 1114305 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-30 22:20:56.275485509 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:20:56.285566 1114305 cni.go:84] Creating CNI manager for ""
	I0830 22:20:56.285583 1114305 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0830 22:20:56.285592 1114305 start_flags.go:319] config:
	{Name:stopped-upgrade-836210 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-836210 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.115 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:20:56.287858 1114305 out.go:177] * Starting control plane node stopped-upgrade-836210 in cluster stopped-upgrade-836210
	I0830 22:20:56.289572 1114305 cache.go:122] Beginning downloading kic base image for docker with crio
	I0830 22:20:56.291480 1114305 out.go:177] * Pulling base image ...
	I0830 22:20:56.293386 1114305 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0830 22:20:56.293550 1114305 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0830 22:20:56.311605 1114305 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I0830 22:20:56.312187 1114305 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I0830 22:20:56.312711 1114305 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W0830 22:20:56.376747 1114305 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0830 22:20:56.376957 1114305 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/stopped-upgrade-836210/config.json ...
	I0830 22:20:56.376998 1114305 cache.go:107] acquiring lock: {Name:mkf5ab9713f972e910cdd35e849e7b313ff0cf80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:20:56.377095 1114305 cache.go:115] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0830 22:20:56.377106 1114305 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.845µs
	I0830 22:20:56.377115 1114305 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0830 22:20:56.377124 1114305 cache.go:107] acquiring lock: {Name:mk06076f7b31c5287734228bdc2942cac2953015 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:20:56.377307 1114305 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I0830 22:20:56.377337 1114305 cache.go:107] acquiring lock: {Name:mkfad56a71e12611916dea6bf70fd042ac640a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:20:56.377451 1114305 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I0830 22:20:56.377597 1114305 cache.go:107] acquiring lock: {Name:mk09a8e8ef4f40d9e8afb0f142b26cbc91a70a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:20:56.377703 1114305 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0830 22:20:56.377751 1114305 cache.go:107] acquiring lock: {Name:mk208fc2f3b60244f9f2ab5a28145abac20df0d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:20:56.377829 1114305 cache.go:107] acquiring lock: {Name:mk852a6aca1b3325ffa93aa8a30a68ac177b5cf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:20:56.377915 1114305 cache.go:107] acquiring lock: {Name:mk0f0f8d201bdc1fca6426f53c7ecf3d4fa67ad9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:20:56.377955 1114305 cache.go:107] acquiring lock: {Name:mkd5c8e89021331bf56571747bef80c528c1deb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:20:56.378698 1114305 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I0830 22:20:56.378848 1114305 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0830 22:20:56.379447 1114305 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0830 22:20:56.379828 1114305 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I0830 22:20:56.380104 1114305 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0830 22:20:56.380549 1114305 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0830 22:20:56.380873 1114305 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I0830 22:20:56.381581 1114305 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I0830 22:20:56.381692 1114305 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0830 22:20:56.381925 1114305 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0830 22:20:56.382814 1114305 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0830 22:20:56.749635 1114305 cache.go:162] opening:  /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	W0830 22:20:56.801413 1114305 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I0830 22:20:56.801472 1114305 cache.go:162] opening:  /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I0830 22:20:56.802181 1114305 cache.go:162] opening:  /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I0830 22:20:56.803680 1114305 cache.go:162] opening:  /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0830 22:20:56.811314 1114305 cache.go:162] opening:  /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	W0830 22:20:56.819361 1114305 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0830 22:20:56.819455 1114305 cache.go:162] opening:  /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	W0830 22:20:56.825761 1114305 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I0830 22:20:56.825857 1114305 cache.go:162] opening:  /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I0830 22:20:56.961794 1114305 cache.go:157] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0830 22:20:56.961822 1114305 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 584.078848ms
	I0830 22:20:56.961836 1114305 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  11.99 MiB / 287.99 MiB [>] 4.16% ? p/s ?
	I0830 22:20:57.457355 1114305 cache.go:157] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0830 22:20:57.457383 1114305 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.079556144s
	I0830 22:20:57.457396 1114305 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0830 22:20:57.479769 1114305 cache.go:157] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0830 22:20:57.479846 1114305 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 1.101930989s
	I0830 22:20:57.479872 1114305 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 43.16 MiB
	I0830 22:20:57.902154 1114305 cache.go:157] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0830 22:20:57.902434 1114305 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.525284151s
	I0830 22:20:57.902545 1114305 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  29.81 MiB / 287.99 MiB  10.35% 40.38 MiB
	I0830 22:20:58.591630 1114305 cache.go:157] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0830 22:20:58.591769 1114305 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 2.214160487s
	I0830 22:20:58.591958 1114305 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  73.57 MiB / 287.99 MiB  25.55% 40.38 MiB
	I0830 22:20:59.300372 1114305 cache.go:157] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0830 22:20:59.300399 1114305 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.923066293s
	I0830 22:20:59.300412 1114305 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  270.19 MiB / 287.99 MiB  93.82% 49.48 MiB
	I0830 22:21:01.944811 1114305 cache.go:157] /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0830 22:21:01.945395 1114305 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 5.567431717s
	I0830 22:21:01.945421 1114305 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0830 22:21:01.945437 1114305 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 43.50 MiB
	I0830 22:21:03.661895 1114305 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I0830 22:21:03.661910 1114305 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I0830 22:21:03.827307 1114305 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I0830 22:21:03.827345 1114305 cache.go:195] Successfully downloaded all kic artifacts
	I0830 22:21:03.827396 1114305 start.go:365] acquiring machines lock for stopped-upgrade-836210: {Name:mk5f159bf3e0b2e1e0e7dba188a6788a9d455a39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:21:03.827470 1114305 start.go:369] acquired machines lock for "stopped-upgrade-836210" in 53.686µs
	I0830 22:21:03.827491 1114305 start.go:96] Skipping create...Using existing machine configuration
	I0830 22:21:03.827496 1114305 fix.go:54] fixHost starting: 
	I0830 22:21:03.827797 1114305 cli_runner.go:164] Run: docker container inspect stopped-upgrade-836210 --format={{.State.Status}}
	I0830 22:21:03.845986 1114305 fix.go:102] recreateIfNeeded on stopped-upgrade-836210: state=Stopped err=<nil>
	W0830 22:21:03.846013 1114305 fix.go:128] unexpected machine state, will restart: <nil>
	I0830 22:21:03.849262 1114305 out.go:177] * Restarting existing docker container for "stopped-upgrade-836210" ...
	I0830 22:21:03.851314 1114305 cli_runner.go:164] Run: docker start stopped-upgrade-836210
	I0830 22:21:04.170521 1114305 cli_runner.go:164] Run: docker container inspect stopped-upgrade-836210 --format={{.State.Status}}
	I0830 22:21:04.202773 1114305 kic.go:426] container "stopped-upgrade-836210" state is running.
	I0830 22:21:04.203234 1114305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-836210
	I0830 22:21:04.231216 1114305 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/stopped-upgrade-836210/config.json ...
	I0830 22:21:04.231621 1114305 machine.go:88] provisioning docker machine ...
	I0830 22:21:04.231640 1114305 ubuntu.go:169] provisioning hostname "stopped-upgrade-836210"
	I0830 22:21:04.231704 1114305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-836210
	I0830 22:21:04.257463 1114305 main.go:141] libmachine: Using SSH client type: native
	I0830 22:21:04.257992 1114305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I0830 22:21:04.258008 1114305 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-836210 && echo "stopped-upgrade-836210" | sudo tee /etc/hostname
	I0830 22:21:04.258970 1114305 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0830 22:21:07.421828 1114305 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-836210
	
	I0830 22:21:07.421911 1114305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-836210
	I0830 22:21:07.444382 1114305 main.go:141] libmachine: Using SSH client type: native
	I0830 22:21:07.444906 1114305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I0830 22:21:07.444934 1114305 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-836210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-836210/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-836210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:21:07.586279 1114305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:21:07.586306 1114305 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17145-984449/.minikube CaCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17145-984449/.minikube}
	I0830 22:21:07.586333 1114305 ubuntu.go:177] setting up certificates
	I0830 22:21:07.586341 1114305 provision.go:83] configureAuth start
	I0830 22:21:07.586403 1114305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-836210
	I0830 22:21:07.604969 1114305 provision.go:138] copyHostCerts
	I0830 22:21:07.605039 1114305 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem, removing ...
	I0830 22:21:07.605048 1114305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem
	I0830 22:21:07.605200 1114305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/ca.pem (1082 bytes)
	I0830 22:21:07.605317 1114305 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem, removing ...
	I0830 22:21:07.605323 1114305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem
	I0830 22:21:07.605352 1114305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/cert.pem (1123 bytes)
	I0830 22:21:07.605412 1114305 exec_runner.go:144] found /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem, removing ...
	I0830 22:21:07.605420 1114305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem
	I0830 22:21:07.605444 1114305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17145-984449/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17145-984449/.minikube/key.pem (1679 bytes)
	I0830 22:21:07.605487 1114305 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-836210 san=[192.168.59.115 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-836210]
	I0830 22:21:07.978490 1114305 provision.go:172] copyRemoteCerts
	I0830 22:21:07.978582 1114305 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:21:07.978628 1114305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-836210
	I0830 22:21:07.999640 1114305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/stopped-upgrade-836210/id_rsa Username:docker}
	I0830 22:21:08.102482 1114305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0830 22:21:08.128898 1114305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0830 22:21:08.152718 1114305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0830 22:21:08.176361 1114305 provision.go:86] duration metric: configureAuth took 590.004339ms
	I0830 22:21:08.176396 1114305 ubuntu.go:193] setting minikube options for container-runtime
	I0830 22:21:08.176585 1114305 config.go:182] Loaded profile config "stopped-upgrade-836210": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0830 22:21:08.176697 1114305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-836210
	I0830 22:21:08.195556 1114305 main.go:141] libmachine: Using SSH client type: native
	I0830 22:21:08.196039 1114305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34201 <nil> <nil>}
	I0830 22:21:08.196068 1114305 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0830 22:21:08.619230 1114305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0830 22:21:08.619254 1114305 machine.go:91] provisioned docker machine in 4.387620355s
	I0830 22:21:08.619264 1114305 start.go:300] post-start starting for "stopped-upgrade-836210" (driver="docker")
	I0830 22:21:08.619273 1114305 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:21:08.619339 1114305 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:21:08.619390 1114305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-836210
	I0830 22:21:08.639242 1114305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/stopped-upgrade-836210/id_rsa Username:docker}
	I0830 22:21:08.742944 1114305 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:21:08.747202 1114305 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0830 22:21:08.747228 1114305 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0830 22:21:08.747240 1114305 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0830 22:21:08.747246 1114305 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0830 22:21:08.747257 1114305 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/addons for local assets ...
	I0830 22:21:08.747319 1114305 filesync.go:126] Scanning /home/jenkins/minikube-integration/17145-984449/.minikube/files for local assets ...
	I0830 22:21:08.747414 1114305 filesync.go:149] local asset: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem -> 9898252.pem in /etc/ssl/certs
	I0830 22:21:08.747524 1114305 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 22:21:08.757690 1114305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/ssl/certs/9898252.pem --> /etc/ssl/certs/9898252.pem (1708 bytes)
	I0830 22:21:08.785268 1114305 start.go:303] post-start completed in 165.987958ms
	I0830 22:21:08.785350 1114305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 22:21:08.785403 1114305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-836210
	I0830 22:21:08.803531 1114305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/stopped-upgrade-836210/id_rsa Username:docker}
	I0830 22:21:08.901542 1114305 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0830 22:21:08.907901 1114305 fix.go:56] fixHost completed within 5.080395036s
	I0830 22:21:08.907928 1114305 start.go:83] releasing machines lock for "stopped-upgrade-836210", held for 5.080448755s
	I0830 22:21:08.908016 1114305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-836210
	I0830 22:21:08.926732 1114305 ssh_runner.go:195] Run: cat /version.json
	I0830 22:21:08.926794 1114305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-836210
	I0830 22:21:08.927035 1114305 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:21:08.927094 1114305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-836210
	I0830 22:21:08.953482 1114305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/stopped-upgrade-836210/id_rsa Username:docker}
	I0830 22:21:08.966956 1114305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34201 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/stopped-upgrade-836210/id_rsa Username:docker}
	W0830 22:21:09.057880 1114305 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0830 22:21:09.057985 1114305 ssh_runner.go:195] Run: systemctl --version
	I0830 22:21:09.204186 1114305 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0830 22:21:09.314298 1114305 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 22:21:09.320604 1114305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:21:09.344689 1114305 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0830 22:21:09.344817 1114305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:21:09.375329 1114305 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 22:21:09.375354 1114305 start.go:466] detecting cgroup driver to use...
	I0830 22:21:09.375400 1114305 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0830 22:21:09.375466 1114305 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0830 22:21:09.406455 1114305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0830 22:21:09.419082 1114305 docker.go:196] disabling cri-docker service (if available) ...
	I0830 22:21:09.419185 1114305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0830 22:21:09.432199 1114305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0830 22:21:09.444879 1114305 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0830 22:21:09.458706 1114305 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0830 22:21:09.458843 1114305 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0830 22:21:09.566840 1114305 docker.go:212] disabling docker service ...
	I0830 22:21:09.566975 1114305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0830 22:21:09.582184 1114305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0830 22:21:09.595774 1114305 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0830 22:21:09.704736 1114305 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0830 22:21:09.825721 1114305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0830 22:21:09.839449 1114305 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:21:09.858415 1114305 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0830 22:21:09.858494 1114305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0830 22:21:09.873086 1114305 out.go:177] 
	W0830 22:21:09.874802 1114305 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0830 22:21:09.874826 1114305 out.go:239] * 
	W0830 22:21:09.875834 1114305 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0830 22:21:09.878356 1114305 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-836210 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (107.64s)
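
Triage note: the exit status 90 above traces to the pause_image step, which runs sed against /etc/crio/crio.conf.d/02-crio.conf on a machine originally provisioned by minikube v1.17.0; the "No such file or directory" stderr indicates the older base image ships without that cri-o drop-in. Below is a minimal sketch of a guard that seeds the file before the rewrite. ensurePauseImage, the seed contents, and the direct os/exec calls are illustrative assumptions, not minikube's actual implementation (the real code runs these commands over SSH with sudo).

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const dropIn = "/etc/crio/crio.conf.d/02-crio.conf"

// ensurePauseImage seeds the cri-o drop-in if it is missing, then
// applies the same sed rewrite that failed in the log above.
// Hypothetical sketch; assumes it runs as root on the guest.
func ensurePauseImage(image string) error {
	if _, err := os.Stat(dropIn); os.IsNotExist(err) {
		if err := os.MkdirAll("/etc/crio/crio.conf.d", 0o755); err != nil {
			return err
		}
		// Seed a line for the sed pattern to match; the TOML section
		// name is an assumption about the cri-o config layout.
		seed := "[crio.image]\npause_image = \"\"\n"
		if err := os.WriteFile(dropIn, []byte(seed), 0o644); err != nil {
			return err
		}
	}
	expr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, image)
	if out, err := exec.Command("sed", "-i", expr, dropIn).CombinedOutput(); err != nil {
		return fmt.Errorf("update pause_image: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensurePauseImage("registry.k8s.io/pause:3.2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

With a seed file in place, the sed expression from the log matches and rewrites the pause_image line instead of exiting with status 2.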

                                                
                                    

Test pass (268/304)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 16.45
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.1/json-events 9.32
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
19 TestBinaryMirror 0.64
22 TestAddons/Setup 170.18
24 TestAddons/parallel/Registry 16.28
26 TestAddons/parallel/InspektorGadget 10.86
27 TestAddons/parallel/MetricsServer 5.88
30 TestAddons/parallel/CSI 65.21
31 TestAddons/parallel/Headlamp 11.57
32 TestAddons/parallel/CloudSpanner 5.75
35 TestAddons/serial/GCPAuth/Namespaces 0.18
36 TestAddons/StoppedEnableDisable 12.31
37 TestCertOptions 40.87
38 TestCertExpiration 464.41
40 TestForceSystemdFlag 41.17
41 TestForceSystemdEnv 40.37
47 TestErrorSpam/setup 33.45
48 TestErrorSpam/start 0.85
49 TestErrorSpam/status 1.12
50 TestErrorSpam/pause 1.91
51 TestErrorSpam/unpause 1.93
52 TestErrorSpam/stop 1.45
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 78.1
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 42.94
59 TestFunctional/serial/KubeContext 0.07
60 TestFunctional/serial/KubectlGetPods 0.12
63 TestFunctional/serial/CacheCmd/cache/add_remote 4.01
64 TestFunctional/serial/CacheCmd/cache/add_local 1.13
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
66 TestFunctional/serial/CacheCmd/cache/list 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
68 TestFunctional/serial/CacheCmd/cache/cache_reload 2.27
69 TestFunctional/serial/CacheCmd/cache/delete 0.12
70 TestFunctional/serial/MinikubeKubectlCmd 0.15
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
72 TestFunctional/serial/ExtraConfig 35.99
73 TestFunctional/serial/ComponentHealth 0.1
74 TestFunctional/serial/LogsCmd 1.89
75 TestFunctional/serial/LogsFileCmd 1.9
76 TestFunctional/serial/InvalidService 4.68
78 TestFunctional/parallel/ConfigCmd 0.51
79 TestFunctional/parallel/DashboardCmd 9.47
80 TestFunctional/parallel/DryRun 0.51
81 TestFunctional/parallel/InternationalLanguage 0.23
82 TestFunctional/parallel/StatusCmd 1.14
86 TestFunctional/parallel/ServiceCmdConnect 6.67
87 TestFunctional/parallel/AddonsCmd 0.15
90 TestFunctional/parallel/SSHCmd 0.87
91 TestFunctional/parallel/CpCmd 1.53
93 TestFunctional/parallel/FileSync 0.41
94 TestFunctional/parallel/CertSync 2.25
98 TestFunctional/parallel/NodeLabels 0.09
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
102 TestFunctional/parallel/License 0.45
103 TestFunctional/parallel/Version/short 0.09
104 TestFunctional/parallel/Version/components 0.83
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.66
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.56
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
109 TestFunctional/parallel/ImageCommands/ImageBuild 5.07
110 TestFunctional/parallel/ImageCommands/Setup 2.07
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
114 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.99
115 TestFunctional/parallel/ServiceCmd/DeployApp 11.33
116 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.13
117 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.26
118 TestFunctional/parallel/ServiceCmd/List 0.5
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
121 TestFunctional/parallel/ServiceCmd/Format 0.54
122 TestFunctional/parallel/ServiceCmd/URL 0.5
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.73
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 218.62
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.12
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.34
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.98
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
133 TestFunctional/parallel/ProfileCmd/profile_list 0.42
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
135 TestFunctional/parallel/MountCmd/any-port 8.57
136 TestFunctional/parallel/MountCmd/specific-port 2.29
137 TestFunctional/parallel/MountCmd/VerifyCleanup 1.4
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/delete_addon-resizer_images 0.15
145 TestFunctional/delete_my-image_image 0.05
146 TestFunctional/delete_minikube_cached_images 0.04
150 TestIngressAddonLegacy/StartLegacyK8sCluster 88.76
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.51
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.8
157 TestJSONOutput/start/Command 52.88
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.82
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.75
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.98
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.23
182 TestKicCustomNetwork/create_custom_network 46.44
183 TestKicCustomNetwork/use_default_bridge_network 35.95
184 TestKicExistingNetwork 33.91
185 TestKicCustomSubnet 34.89
186 TestKicStaticIP 35.8
187 TestMainNoArgs 0.07
188 TestMinikubeProfile 74.72
191 TestMountStart/serial/StartWithMountFirst 7.17
192 TestMountStart/serial/VerifyMountFirst 0.3
193 TestMountStart/serial/StartWithMountSecond 7.11
194 TestMountStart/serial/VerifyMountSecond 0.29
195 TestMountStart/serial/DeleteFirst 1.71
196 TestMountStart/serial/VerifyMountPostDelete 0.29
197 TestMountStart/serial/Stop 1.22
198 TestMountStart/serial/RestartStopped 8.85
199 TestMountStart/serial/VerifyMountPostStop 0.28
202 TestMultiNode/serial/FreshStart2Nodes 128.86
203 TestMultiNode/serial/DeployApp2Nodes 5.87
205 TestMultiNode/serial/AddNode 21.35
206 TestMultiNode/serial/ProfileList 0.36
207 TestMultiNode/serial/CopyFile 11.24
208 TestMultiNode/serial/StopNode 2.37
209 TestMultiNode/serial/StartAfterStop 12.78
210 TestMultiNode/serial/RestartKeepsNodes 119.33
211 TestMultiNode/serial/DeleteNode 5.23
212 TestMultiNode/serial/StopMultiNode 24.08
213 TestMultiNode/serial/RestartMultiNode 81.05
214 TestMultiNode/serial/ValidateNameConflict 36.15
219 TestPreload 171.46
221 TestScheduledStopUnix 109.95
224 TestInsufficientStorage 13.64
227 TestKubernetesUpgrade 375.84
230 TestPause/serial/Start 89.2
232 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
233 TestNoKubernetes/serial/StartWithK8s 44.51
234 TestNoKubernetes/serial/StartWithStopK8s 8.12
235 TestNoKubernetes/serial/Start 9.73
236 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
237 TestNoKubernetes/serial/ProfileList 1.04
238 TestNoKubernetes/serial/Stop 1.24
239 TestNoKubernetes/serial/StartNoArgs 7.86
240 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
248 TestNetworkPlugins/group/false 3.83
252 TestPause/serial/SecondStartNoReconfiguration 47.61
253 TestPause/serial/Pause 1.14
254 TestPause/serial/VerifyStatus 0.46
255 TestPause/serial/Unpause 0.96
256 TestPause/serial/PauseAgain 1.37
257 TestPause/serial/DeletePaused 3.19
258 TestPause/serial/VerifyDeletedResources 0.49
259 TestStoppedBinaryUpgrade/Setup 1.21
261 TestStoppedBinaryUpgrade/MinikubeLogs 0.72
269 TestNetworkPlugins/group/auto/Start 78.16
270 TestNetworkPlugins/group/auto/KubeletFlags 0.35
271 TestNetworkPlugins/group/auto/NetCatPod 13.54
272 TestNetworkPlugins/group/auto/DNS 0.23
273 TestNetworkPlugins/group/auto/Localhost 0.21
274 TestNetworkPlugins/group/auto/HairPin 0.19
275 TestNetworkPlugins/group/kindnet/Start 89.55
276 TestNetworkPlugins/group/calico/Start 80.36
277 TestNetworkPlugins/group/calico/ControllerPod 5.04
278 TestNetworkPlugins/group/calico/KubeletFlags 0.32
279 TestNetworkPlugins/group/calico/NetCatPod 11.49
280 TestNetworkPlugins/group/kindnet/ControllerPod 5.05
281 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
282 TestNetworkPlugins/group/kindnet/NetCatPod 11.33
283 TestNetworkPlugins/group/calico/DNS 0.39
284 TestNetworkPlugins/group/calico/Localhost 0.25
285 TestNetworkPlugins/group/calico/HairPin 0.26
286 TestNetworkPlugins/group/kindnet/DNS 0.34
287 TestNetworkPlugins/group/kindnet/Localhost 0.27
288 TestNetworkPlugins/group/kindnet/HairPin 0.28
289 TestNetworkPlugins/group/custom-flannel/Start 74.66
290 TestNetworkPlugins/group/enable-default-cni/Start 92.32
291 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
292 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.4
293 TestNetworkPlugins/group/custom-flannel/DNS 0.23
294 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
295 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
296 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
297 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.49
298 TestNetworkPlugins/group/flannel/Start 71.24
299 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
300 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
301 TestNetworkPlugins/group/enable-default-cni/HairPin 0.29
302 TestNetworkPlugins/group/bridge/Start 91.97
303 TestNetworkPlugins/group/flannel/ControllerPod 5.05
304 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
305 TestNetworkPlugins/group/flannel/NetCatPod 12.38
306 TestNetworkPlugins/group/flannel/DNS 0.23
307 TestNetworkPlugins/group/flannel/Localhost 0.18
308 TestNetworkPlugins/group/flannel/HairPin 0.2
310 TestStartStop/group/old-k8s-version/serial/FirstStart 142.99
311 TestNetworkPlugins/group/bridge/KubeletFlags 0.38
312 TestNetworkPlugins/group/bridge/NetCatPod 13.4
313 TestNetworkPlugins/group/bridge/DNS 0.29
314 TestNetworkPlugins/group/bridge/Localhost 0.22
315 TestNetworkPlugins/group/bridge/HairPin 0.24
317 TestStartStop/group/no-preload/serial/FirstStart 65.51
318 TestStartStop/group/no-preload/serial/DeployApp 10.45
319 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
320 TestStartStop/group/no-preload/serial/Stop 12.12
321 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
322 TestStartStop/group/no-preload/serial/SecondStart 349.62
323 TestStartStop/group/old-k8s-version/serial/DeployApp 9.63
324 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.11
325 TestStartStop/group/old-k8s-version/serial/Stop 12.17
326 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
327 TestStartStop/group/old-k8s-version/serial/SecondStart 436
328 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.03
329 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.19
330 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.53
331 TestStartStop/group/no-preload/serial/Pause 4.92
333 TestStartStop/group/embed-certs/serial/FirstStart 87.47
334 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
335 TestStartStop/group/embed-certs/serial/DeployApp 8.46
336 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.16
337 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
338 TestStartStop/group/embed-certs/serial/Stop 12.59
339 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.42
340 TestStartStop/group/old-k8s-version/serial/Pause 3.64
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.27
343 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
344 TestStartStop/group/embed-certs/serial/SecondStart 347.72
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.5
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.3
347 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.09
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 351.6
350 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.03
351 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
352 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
353 TestStartStop/group/embed-certs/serial/Pause 3.68
355 TestStartStop/group/newest-cni/serial/FirstStart 45.06
356 TestStartStop/group/newest-cni/serial/DeployApp 0
357 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.34
358 TestStartStop/group/newest-cni/serial/Stop 1.27
359 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
360 TestStartStop/group/newest-cni/serial/SecondStart 34
361 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.04
362 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
363 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.52
364 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.12
365 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
366 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
367 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.5
368 TestStartStop/group/newest-cni/serial/Pause 3.56
x
+
TestDownloadOnly/v1.16.0/json-events (16.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-136653 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-136653 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (16.449801849s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (16.45s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-136653
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-136653: exit status 85 (76.40172ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-136653 | jenkins | v1.31.2 | 30 Aug 23 21:37 UTC |          |
	|         | -p download-only-136653        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 21:37:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 21:37:25.289550  989830 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:37:25.289761  989830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:37:25.289787  989830 out.go:309] Setting ErrFile to fd 2...
	I0830 21:37:25.289807  989830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:37:25.290150  989830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
	W0830 21:37:25.290321  989830 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17145-984449/.minikube/config/config.json: open /home/jenkins/minikube-integration/17145-984449/.minikube/config/config.json: no such file or directory
	I0830 21:37:25.290771  989830 out.go:303] Setting JSON to true
	I0830 21:37:25.291911  989830 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22780,"bootTime":1693408666,"procs":419,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0830 21:37:25.292002  989830 start.go:138] virtualization:  
	I0830 21:37:25.295479  989830 out.go:97] [download-only-136653] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0830 21:37:25.297998  989830 out.go:169] MINIKUBE_LOCATION=17145
	W0830 21:37:25.295724  989830 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball: no such file or directory
	I0830 21:37:25.295785  989830 notify.go:220] Checking for updates...
	I0830 21:37:25.301648  989830 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:37:25.303532  989830 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 21:37:25.305482  989830 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	I0830 21:37:25.307428  989830 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0830 21:37:25.311618  989830 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0830 21:37:25.311866  989830 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 21:37:25.340157  989830 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 21:37:25.340236  989830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 21:37:25.421186  989830 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-08-30 21:37:25.411129245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 21:37:25.421289  989830 docker.go:294] overlay module found
	I0830 21:37:25.423236  989830 out.go:97] Using the docker driver based on user configuration
	I0830 21:37:25.423262  989830 start.go:298] selected driver: docker
	I0830 21:37:25.423268  989830 start.go:902] validating driver "docker" against <nil>
	I0830 21:37:25.423367  989830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 21:37:25.489115  989830 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-08-30 21:37:25.479993755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 21:37:25.489297  989830 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 21:37:25.489558  989830 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0830 21:37:25.489722  989830 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0830 21:37:25.491798  989830 out.go:169] Using Docker driver with root privileges
	I0830 21:37:25.493599  989830 cni.go:84] Creating CNI manager for ""
	I0830 21:37:25.493617  989830 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0830 21:37:25.493625  989830 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0830 21:37:25.493636  989830 start_flags.go:319] config:
	{Name:download-only-136653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-136653 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:37:25.495599  989830 out.go:97] Starting control plane node download-only-136653 in cluster download-only-136653
	I0830 21:37:25.495618  989830 cache.go:122] Beginning downloading kic base image for docker with crio
	I0830 21:37:25.497551  989830 out.go:97] Pulling base image ...
	I0830 21:37:25.497575  989830 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 21:37:25.497708  989830 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0830 21:37:25.514175  989830 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad to local cache
	I0830 21:37:25.514344  989830 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local cache directory
	I0830 21:37:25.514452  989830 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad to local cache
	I0830 21:37:25.577628  989830 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0830 21:37:25.577658  989830 cache.go:57] Caching tarball of preloaded images
	I0830 21:37:25.577792  989830 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 21:37:25.580360  989830 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0830 21:37:25.580377  989830 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0830 21:37:25.708908  989830 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0830 21:37:31.693486  989830 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad as a tarball
	I0830 21:37:33.545341  989830 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0830 21:37:33.545447  989830 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0830 21:37:34.512787  989830 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0830 21:37:34.513157  989830 profile.go:148] Saving config to /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/download-only-136653/config.json ...
	I0830 21:37:34.513188  989830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/download-only-136653/config.json: {Name:mkea60b7261ff57f3de21af25341ca1ec1c55672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 21:37:34.513835  989830 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0830 21:37:34.514037  989830 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-136653"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
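
Note on the preload mechanics visible above: the download URL carries the expected digest as a ?checksum=md5:... query parameter, and the subsequent "getting/saving/verifying checksum" lines show the tarball being validated before it is cached. Below is a minimal sketch of that verify-while-downloading pattern, assuming plain net/http and crypto/md5; fetchWithMD5 is an illustrative name, not download.go's actual API.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchWithMD5 downloads url to dest and rejects the file if its MD5
// digest does not match wantHex.
func fetchWithMD5(url, dest, wantHex string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	// Hash the body while writing it to disk, avoiding a second pass
	// over a multi-hundred-megabyte tarball.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		os.Remove(dest) // discard the corrupt download
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// URL and digest taken from the log above.
	err := fetchWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4",
		"preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4",
		"743cd3b7071469270e4dbdc0d89badaa",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}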

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/json-events (9.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-136653 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-136653 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.321957098s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (9.32s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-136653
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-136653: exit status 85 (84.172194ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-136653 | jenkins | v1.31.2 | 30 Aug 23 21:37 UTC |          |
	|         | -p download-only-136653        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-136653 | jenkins | v1.31.2 | 30 Aug 23 21:37 UTC |          |
	|         | -p download-only-136653        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 21:37:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 21:37:41.819756  989909 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:37:41.820050  989909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:37:41.820061  989909 out.go:309] Setting ErrFile to fd 2...
	I0830 21:37:41.820067  989909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:37:41.820351  989909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
	W0830 21:37:41.820473  989909 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17145-984449/.minikube/config/config.json: open /home/jenkins/minikube-integration/17145-984449/.minikube/config/config.json: no such file or directory
	I0830 21:37:41.820699  989909 out.go:303] Setting JSON to true
	I0830 21:37:41.821747  989909 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22796,"bootTime":1693408666,"procs":417,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0830 21:37:41.821822  989909 start.go:138] virtualization:  
	I0830 21:37:41.824038  989909 out.go:97] [download-only-136653] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0830 21:37:41.826237  989909 out.go:169] MINIKUBE_LOCATION=17145
	I0830 21:37:41.824307  989909 notify.go:220] Checking for updates...
	I0830 21:37:41.829942  989909 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:37:41.831824  989909 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 21:37:41.834191  989909 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	I0830 21:37:41.835990  989909 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0830 21:37:41.839773  989909 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0830 21:37:41.840284  989909 config.go:182] Loaded profile config "download-only-136653": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0830 21:37:41.840382  989909 start.go:810] api.Load failed for download-only-136653: filestore "download-only-136653": Docker machine "download-only-136653" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0830 21:37:41.840492  989909 driver.go:373] Setting default libvirt URI to qemu:///system
	W0830 21:37:41.840523  989909 start.go:810] api.Load failed for download-only-136653: filestore "download-only-136653": Docker machine "download-only-136653" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0830 21:37:41.865215  989909 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 21:37:41.865296  989909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 21:37:41.953635  989909 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-30 21:37:41.943588331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 21:37:41.953734  989909 docker.go:294] overlay module found
	I0830 21:37:41.956052  989909 out.go:97] Using the docker driver based on existing profile
	I0830 21:37:41.956078  989909 start.go:298] selected driver: docker
	I0830 21:37:41.956087  989909 start.go:902] validating driver "docker" against &{Name:download-only-136653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-136653 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:37:41.956262  989909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 21:37:42.034601  989909 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-30 21:37:42.024124337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 21:37:42.035056  989909 cni.go:84] Creating CNI manager for ""
	I0830 21:37:42.035073  989909 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0830 21:37:42.035087  989909 start_flags.go:319] config:
	{Name:download-only-136653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-136653 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:37:42.037255  989909 out.go:97] Starting control plane node download-only-136653 in cluster download-only-136653
	I0830 21:37:42.037283  989909 cache.go:122] Beginning downloading kic base image for docker with crio
	I0830 21:37:42.039277  989909 out.go:97] Pulling base image ...
	I0830 21:37:42.039303  989909 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:37:42.039503  989909 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local docker daemon
	I0830 21:37:42.056519  989909 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad to local cache
	I0830 21:37:42.056678  989909 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local cache directory
	I0830 21:37:42.056709  989909 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad in local cache directory, skipping pull
	I0830 21:37:42.056718  989909 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad exists in cache, skipping pull
	I0830 21:37:42.056726  989909 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad as a tarball
	I0830 21:37:42.108908  989909 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	I0830 21:37:42.108948  989909 cache.go:57] Caching tarball of preloaded images
	I0830 21:37:42.109149  989909 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0830 21:37:42.111713  989909 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0830 21:37:42.111750  989909 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4 ...
	I0830 21:37:42.243866  989909 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:44f3d096b9be2c2ed42e6b0d364bc859 -> /home/jenkins/minikube-integration/17145-984449/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-136653"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.08s)
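
A note on the preload step above: the tarball URL carries a ?checksum=md5: query, and the downloader verifies the file against that digest before caching it. The following standalone Go sketch shows the same kind of md5 verification; the file name and digest are copied from the download.go:107 line, and the program is illustrative rather than minikube's actual download code.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	// verifyMD5 hashes the file at path and compares the digest to wantHex.
	func verifyMD5(path, wantHex string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// File name and digest are the ones from the log line above.
		if err := verifyMD5("preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-arm64.tar.lz4",
			"44f3d096b9be2c2ed42e6b0d364bc859"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("preload checksum OK")
	}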

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-136653
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.64s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-633577 --alsologtostderr --binary-mirror http://127.0.0.1:33847 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-633577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-633577
--- PASS: TestBinaryMirror (0.64s)
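
TestBinaryMirror starts minikube with --binary-mirror pointing at a local HTTP endpoint (http://127.0.0.1:33847 in this run) so kubeadm, kubelet, and kubectl are fetched from it instead of the public release bucket. A minimal stand-in for such a mirror is just a static file server; in this sketch, ./mirror is a hypothetical directory holding the pre-downloaded binaries.

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a directory of pre-downloaded binaries; minikube is then
		// started with --binary-mirror http://127.0.0.1:33847.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:33847", nil))
	}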

TestAddons/Setup (170.18s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-934429 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-934429 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m50.176859979s)
--- PASS: TestAddons/Setup (170.18s)

TestAddons/parallel/Registry (16.28s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 93.389922ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-74x2w" [5c1a0a58-4b38-4d8c-b5da-9aa551fc0068] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.035745523s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4jnsf" [b2e2e34a-4172-482d-998a-c32144619319] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015358943s
addons_test.go:316: (dbg) Run:  kubectl --context addons-934429 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-934429 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-934429 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.025545541s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-934429 ip
2023/08/30 21:40:58 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-934429 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.28s)
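
The registry checks above reduce to two HTTP probes: a wget from a busybox pod against registry.kube-system.svc.cluster.local inside the cluster, and a plain GET against the node IP on port 5000 from the host (the DEBUG line). A sketch of the host-side probe in Go, reusing the node IP that "minikube -p addons-934429 ip" reported in this run:

	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// Same endpoint the test hits: the registry addon exposed on the node IP.
		resp, err := client.Get("http://192.168.49.2:5000")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}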

TestAddons/parallel/InspektorGadget (10.86s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qh7xk" [5bbc0968-e98f-4c73-8699-20ac3794b810] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.014528925s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-934429
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-934429: (5.847063253s)
--- PASS: TestAddons/parallel/InspektorGadget (10.86s)

TestAddons/parallel/MetricsServer (5.88s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 11.246768ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-wjzsv" [92424fd5-04a2-4f7a-a4b6-c9fb99352034] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01861938s
addons_test.go:391: (dbg) Run:  kubectl --context addons-934429 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-934429 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.88s)
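
kubectl top only succeeds once metrics-server is serving the metrics.k8s.io aggregated API, which is effectively what addons_test.go:391 asserts. The same assertion can be scripted by shelling out and treating a non-zero exit as "metrics not ready yet"; a sketch, assuming kubectl is on PATH and configured with this run's context:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Fails with a non-zero exit until metrics-server has scraped the node.
		out, err := exec.Command("kubectl", "--context", "addons-934429",
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err != nil {
			log.Fatalf("metrics not ready: %v\n%s", err, out)
		}
		fmt.Print(string(out))
	}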

TestAddons/parallel/CSI (65.21s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 8.321751ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-934429 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-934429 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5343c288-c2df-4004-b713-6932f91ad229] Pending
helpers_test.go:344: "task-pv-pod" [5343c288-c2df-4004-b713-6932f91ad229] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5343c288-c2df-4004-b713-6932f91ad229] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.026153908s
addons_test.go:560: (dbg) Run:  kubectl --context addons-934429 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-934429 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-934429 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-934429 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-934429 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-934429 delete pod task-pv-pod: (1.156698194s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-934429 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-934429 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934429 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-934429 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e62a952a-b064-4997-822e-2f6b1afd3d43] Pending
helpers_test.go:344: "task-pv-pod-restore" [e62a952a-b064-4997-822e-2f6b1afd3d43] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e62a952a-b064-4997-822e-2f6b1afd3d43] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.016902393s
addons_test.go:602: (dbg) Run:  kubectl --context addons-934429 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-934429 delete pod task-pv-pod-restore: (1.147113843s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-934429 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-934429 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-934429 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-934429 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.781848812s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-934429 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.21s)
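
The repeated helpers_test.go:394 lines above are a single poll loop: the PVC's .status.phase is re-read until it reports Bound, and each create-and-wait step repeats the pattern. A minimal version of that loop, assuming kubectl on PATH and this run's context name:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	// waitForPVCPhase polls a PVC's .status.phase until it matches want.
	func waitForPVCPhase(ctx, name, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", ctx,
				"get", "pvc", name, "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && string(out) == want {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s did not reach phase %s within %v", name, want, timeout)
	}

	func main() {
		if err := waitForPVCPhase("addons-934429", "hpvc", "Bound", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("hpvc is Bound")
	}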

TestAddons/parallel/Headlamp (11.57s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-934429 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-934429 --alsologtostderr -v=1: (1.52893578s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-8bj69" [74ccfc49-ba0f-41f9-8a8f-729154a2a38e] Pending
helpers_test.go:344: "headlamp-699c48fb74-8bj69" [74ccfc49-ba0f-41f9-8a8f-729154a2a38e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-8bj69" [74ccfc49-ba0f-41f9-8a8f-729154a2a38e] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.038552847s
--- PASS: TestAddons/parallel/Headlamp (11.57s)

TestAddons/parallel/CloudSpanner (5.75s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dcc56475c-8pntj" [ced40481-4935-4772-83fc-b0ee0415014a] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.032865151s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-934429
--- PASS: TestAddons/parallel/CloudSpanner (5.75s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-934429 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-934429 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/StoppedEnableDisable (12.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-934429
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-934429: (12.033756864s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-934429
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-934429
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-934429
--- PASS: TestAddons/StoppedEnableDisable (12.31s)

TestCertOptions (40.87s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-056894 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0830 22:18:46.115491  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-056894 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (38.139591016s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-056894 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-056894 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-056894 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-056894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-056894
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-056894: (1.982784679s)
--- PASS: TestCertOptions (40.87s)
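
The openssl x509 call above checks that the custom --apiserver-ips and --apiserver-names values landed in the serving certificate as subject alternative names. The same inspection in Go, assuming apiserver.crt has been copied out of the node to a local file (the local path is hypothetical):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // copied from /var/lib/minikube/certs
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// The test expects 127.0.0.1 and 192.168.15.15 among the IP SANs,
		// plus localhost and www.google.com among the DNS SANs.
		fmt.Println("IP SANs: ", cert.IPAddresses)
		fmt.Println("DNS SANs:", cert.DNSNames)
	}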

TestCertExpiration (464.41s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-467573 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0830 22:18:32.359149  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-467573 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.979549464s)
E0830 22:19:18.291868  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-467573 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-467573 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (4m0.953561021s)
helpers_test.go:175: Cleaning up "cert-expiration-467573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-467573
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-467573: (2.473987412s)
--- PASS: TestCertExpiration (464.41s)
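
TestCertExpiration first issues certificates with a 3-minute lifetime, lets them lapse, and then restarts with --cert-expiration=8760h, relying on the restart to regenerate the expired certs; the second start's 4-minute runtime includes that wait. Whether a certificate has lapsed is a NotAfter comparison; a sketch, again against a hypothetical local copy of the cert:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if left := time.Until(cert.NotAfter); left > 0 {
			fmt.Println("certificate valid for another", left.Round(time.Second))
		} else {
			fmt.Println("certificate expired at", cert.NotAfter)
		}
	}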

TestForceSystemdFlag (41.17s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-549767 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-549767 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.133991592s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-549767 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-549767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-549767
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-549767: (2.609632827s)
--- PASS: TestForceSystemdFlag (41.17s)
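
docker_test.go:132 verifies the --force-systemd result by cat-ing CRI-O's drop-in config on the node; with CRI-O the setting of interest is the cgroup_manager TOML key. A sketch of the same check run against a local copy of the file (02-crio.conf here is a hypothetical copy of /etc/crio/crio.conf.d/02-crio.conf):

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("02-crio.conf")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "cgroup_manager") && strings.Contains(line, "systemd") {
				fmt.Println("CRI-O is using the systemd cgroup manager")
				return
			}
		}
		log.Fatal("systemd cgroup manager not configured")
	}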

TestForceSystemdEnv (40.37s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-618362 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-618362 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.98121223s)
helpers_test.go:175: Cleaning up "force-systemd-env-618362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-618362
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-618362: (2.385412697s)
--- PASS: TestForceSystemdEnv (40.37s)

TestErrorSpam/setup (33.45s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-888877 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-888877 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-888877 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-888877 --driver=docker  --container-runtime=crio: (33.448361783s)
--- PASS: TestErrorSpam/setup (33.45s)

TestErrorSpam/start (0.85s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

TestErrorSpam/status (1.12s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 status
--- PASS: TestErrorSpam/status (1.12s)

TestErrorSpam/pause (1.91s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 pause
--- PASS: TestErrorSpam/pause (1.91s)

TestErrorSpam/unpause (1.93s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 unpause
--- PASS: TestErrorSpam/unpause (1.93s)

TestErrorSpam/stop (1.45s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 stop: (1.242026465s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-888877 --log_dir /tmp/nospam-888877 stop
--- PASS: TestErrorSpam/stop (1.45s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17145-984449/.minikube/files/etc/test/nested/copy/989825/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (78.1s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-540436 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0830 21:45:43.073242  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 21:45:43.079266  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 21:45:43.089503  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 21:45:43.109746  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 21:45:43.150010  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 21:45:43.230260  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 21:45:43.390603  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 21:45:43.711093  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 21:45:44.351946  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 21:45:45.632706  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 21:45:48.192909  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 21:45:53.313630  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 21:46:03.553789  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 21:46:24.034110  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-540436 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m18.098275493s)
--- PASS: TestFunctional/serial/StartWithProxy (78.10s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.94s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-540436 --alsologtostderr -v=8
E0830 21:47:04.994356  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-540436 --alsologtostderr -v=8: (42.93573278s)
functional_test.go:659: soft start took 42.936234868s for "functional-540436" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.94s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-540436 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-540436 cache add registry.k8s.io/pause:3.1: (1.348738257s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-540436 cache add registry.k8s.io/pause:3.3: (1.441795969s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-540436 cache add registry.k8s.io/pause:latest: (1.222836167s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.01s)

TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-540436 /tmp/TestFunctionalserialCacheCmdcacheadd_local711379810/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 cache add minikube-local-cache-test:functional-540436
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 cache delete minikube-local-cache-test:functional-540436
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-540436
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-540436 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (335.788962ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-540436 cache reload: (1.197248912s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.27s)
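
The cache_reload flow above is: remove the image from the node's runtime, confirm crictl no longer finds it (the exit status 1 is the expected intermediate state), run cache reload to re-push everything in minikube's local cache, and confirm the image is back. Scripted end to end, assuming the binary and profile names used throughout this report:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		mk := "out/minikube-linux-arm64"
		p := "functional-540436"
		img := "registry.k8s.io/pause:latest"

		// Remove the image from the node, then confirm it is gone.
		_ = run(mk, "-p", p, "ssh", "sudo", "crictl", "rmi", img)
		if err := run(mk, "-p", p, "ssh", "sudo", "crictl", "inspecti", img); err == nil {
			log.Fatal("image unexpectedly still present")
		}
		// Re-push everything in the local cache; the image should be back.
		if err := run(mk, "-p", p, "cache", "reload"); err != nil {
			log.Fatal(err)
		}
		if err := run(mk, "-p", p, "ssh", "sudo", "crictl", "inspecti", img); err != nil {
			log.Fatalf("image missing after reload: %v", err)
		}
	}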

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 kubectl -- --context functional-540436 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-540436 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (35.99s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-540436 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-540436 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.993229547s)
functional_test.go:757: restart took 35.99334725s for "functional-540436" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.99s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-540436 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
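
functional_test.go:806 fetches the control-plane pods as JSON and derives the phase/status pairs above from .status.phase plus the Ready condition. A trimmed-down decoder for those same fields, assuming kubectl on PATH and this run's context:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-540436",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := "NotReady"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					ready = "Ready"
				}
			}
			fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}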

TestFunctional/serial/LogsCmd (1.89s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-540436 logs: (1.893904626s)
--- PASS: TestFunctional/serial/LogsCmd (1.89s)

TestFunctional/serial/LogsFileCmd (1.9s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 logs --file /tmp/TestFunctionalserialLogsFileCmd4278130942/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-540436 logs --file /tmp/TestFunctionalserialLogsFileCmd4278130942/001/logs.txt: (1.898256228s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.90s)

TestFunctional/serial/InvalidService (4.68s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-540436 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-540436
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-540436: exit status 115 (640.440051ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32387 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-540436 delete -f testdata/invalidsvc.yaml
E0830 21:48:26.914866  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
--- PASS: TestFunctional/serial/InvalidService (4.68s)
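
SVC_UNREACHABLE here means the Service object exists (it is even assigned the NodePort URL shown in the table) but no running pod backs it, so its Endpoints have no ready addresses. That distinction can be checked directly; a sketch, assuming the same context:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// An endpoint-less service prints an empty string here; a healthy one
		// prints the IPs of its ready pods.
		out, err := exec.Command("kubectl", "--context", "functional-540436",
			"get", "endpoints", "invalid-svc",
			"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
		if err != nil {
			log.Fatal(err)
		}
		if len(out) == 0 {
			fmt.Println("invalid-svc has no ready endpoints, which minikube reports as SVC_UNREACHABLE")
			return
		}
		fmt.Println("ready endpoints:", string(out))
	}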

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-540436 config get cpus: exit status 14 (111.52466ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-540436 config get cpus: exit status 14 (80.885839ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
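
Exit status 14 is the expected result of config get on an unset key, so the sequence above is unset, get (14), set, get (0), unset, get (14). Telling that expected failure apart from a real one means inspecting the exit code; a sketch using os/exec:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-540436",
			"config", "get", "cpus")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("cpus =", string(out))
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
			fmt.Println("cpus is not set (expected after config unset)")
		default:
			log.Fatal(err)
		}
	}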

TestFunctional/parallel/DashboardCmd (9.47s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-540436 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-540436 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1016856: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.47s)

TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-540436 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-540436 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (214.360431ms)

-- stdout --
	* [functional-540436] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17145
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0830 21:52:22.606783 1016590 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:52:22.606948 1016590 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:52:22.606955 1016590 out.go:309] Setting ErrFile to fd 2...
	I0830 21:52:22.606961 1016590 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:52:22.607243 1016590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
	I0830 21:52:22.607609 1016590 out.go:303] Setting JSON to false
	I0830 21:52:22.608555 1016590 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23677,"bootTime":1693408666,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0830 21:52:22.608623 1016590 start.go:138] virtualization:  
	I0830 21:52:22.611260 1016590 out.go:177] * [functional-540436] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0830 21:52:22.613632 1016590 out.go:177]   - MINIKUBE_LOCATION=17145
	I0830 21:52:22.615725 1016590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:52:22.613831 1016590 notify.go:220] Checking for updates...
	I0830 21:52:22.619255 1016590 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 21:52:22.621239 1016590 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	I0830 21:52:22.623327 1016590 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0830 21:52:22.625111 1016590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 21:52:22.627211 1016590 config.go:182] Loaded profile config "functional-540436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:52:22.627813 1016590 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 21:52:22.651927 1016590 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 21:52:22.652022 1016590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 21:52:22.745895 1016590 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-30 21:52:22.735902137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 21:52:22.746005 1016590 docker.go:294] overlay module found
	I0830 21:52:22.748316 1016590 out.go:177] * Using the docker driver based on existing profile
	I0830 21:52:22.750019 1016590 start.go:298] selected driver: docker
	I0830 21:52:22.750041 1016590 start.go:902] validating driver "docker" against &{Name:functional-540436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-540436 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:52:22.750151 1016590 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 21:52:22.752953 1016590 out.go:177] 
	W0830 21:52:22.755539 1016590 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0830 21:52:22.757655 1016590 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-540436 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)
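Exit status 23 above corresponds to minikube's RSRC_INSUFFICIENT_REQ_MEMORY reason: the requested 250MiB falls below the 1800MB usable minimum, while the second dry run (no --memory flag) passes. A rough sketch of such a validation step, with the constant and exit code read off the log rather than from minikube's source:

    package main

    import (
        "fmt"
        "os"
    )

    const minUsableMemoryMB = 1800 // usable minimum reported in the log above

    func validateRequestedMemory(reqMB int) error {
        if reqMB < minUsableMemoryMB {
            return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
                reqMB, minUsableMemoryMB)
        }
        return nil
    }

    func main() {
        if err := validateRequestedMemory(250); err != nil {
            fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
            os.Exit(23) // exit status seen in the test output
        }
    }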

TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-540436 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-540436 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (228.971081ms)

-- stdout --
	* [functional-540436] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17145
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0830 21:52:23.118656 1016698 out.go:296] Setting OutFile to fd 1 ...
	I0830 21:52:23.119024 1016698 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:52:23.119056 1016698 out.go:309] Setting ErrFile to fd 2...
	I0830 21:52:23.119094 1016698 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 21:52:23.119750 1016698 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
	I0830 21:52:23.120299 1016698 out.go:303] Setting JSON to false
	I0830 21:52:23.121775 1016698 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23678,"bootTime":1693408666,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0830 21:52:23.121903 1016698 start.go:138] virtualization:  
	I0830 21:52:23.126030 1016698 out.go:177] * [functional-540436] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I0830 21:52:23.128542 1016698 out.go:177]   - MINIKUBE_LOCATION=17145
	I0830 21:52:23.130575 1016698 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 21:52:23.128661 1016698 notify.go:220] Checking for updates...
	I0830 21:52:23.134625 1016698 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 21:52:23.137043 1016698 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	I0830 21:52:23.139318 1016698 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0830 21:52:23.141471 1016698 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 21:52:23.143795 1016698 config.go:182] Loaded profile config "functional-540436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 21:52:23.144354 1016698 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 21:52:23.172162 1016698 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 21:52:23.172280 1016698 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 21:52:23.270945 1016698 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-30 21:52:23.260530868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 21:52:23.271055 1016698 docker.go:294] overlay module found
	I0830 21:52:23.273366 1016698 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0830 21:52:23.275320 1016698 start.go:298] selected driver: docker
	I0830 21:52:23.275343 1016698 start.go:902] validating driver "docker" against &{Name:functional-540436 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693218425-17145@sha256:b79ac53e13f1f04e9fd9bdd5eb7d937a6931d86b3cdddf46e6e66227aea180ad Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-540436 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 21:52:23.275499 1016698 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 21:52:23.278184 1016698 out.go:177] 
	W0830 21:52:23.280038 1016698 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0830 21:52:23.281934 1016698 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
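The French output above is minikube's normal message catalog selected by the process locale; the test presumably runs the binary with a French LC_ALL/LANG in its environment. A hedged sketch of reproducing it the same way:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-540436",
            "--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
        // assumption: LC_ALL/LANG select the translation, as in the run above
        cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        _ = cmd.Run() // expected: exit 23 with the French RSRC_INSUFFICIENT_REQ_MEMORY message
    }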

TestFunctional/parallel/StatusCmd (1.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)
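The -f argument above is a Go text/template evaluated against minikube's status struct; the referenced fields (.Host, .Kubelet, .APIServer, .Kubeconfig) are visible in the command line itself (the test's format string spells the label "kublet"). A self-contained sketch of the same mechanism, with a stand-in Status type:

    package main

    import (
        "os"
        "text/template"
    )

    // Status mirrors the fields referenced by the template in the log; the real type lives in minikube.
    type Status struct {
        Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        tmpl := template.Must(template.New("status").Parse(
            "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
        _ = tmpl.Execute(os.Stdout, Status{
            Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
        })
    }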

TestFunctional/parallel/ServiceCmdConnect (6.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-540436 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-540436 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-dgl64" [095c533f-e33f-487e-bcdf-bed8cfa56651] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-dgl64" [095c533f-e33f-487e-bcdf-bed8cfa56651] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.016180651s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32681
functional_test.go:1674: http://192.168.49.2:32681: success! body:

Hostname: hello-node-connect-7799dfb7c6-dgl64

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32681
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.67s)
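After `service hello-node-connect --url` prints the NodePort endpoint, the test fetches it over HTTP and accepts a 200 response. A minimal sketch of that probe (the retry policy is illustrative, not the test's exact logic):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func fetch(url string) (string, error) {
        client := &http.Client{Timeout: 5 * time.Second}
        for i := 0; i < 5; i++ { // a few retries while the NodePort comes up
            resp, err := client.Get(url)
            if err == nil && resp.StatusCode == http.StatusOK {
                body, rerr := io.ReadAll(resp.Body)
                resp.Body.Close()
                return string(body), rerr
            }
            if err == nil {
                resp.Body.Close()
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("no HTTP 200 from %s", url)
    }

    func main() {
        body, err := fetch("http://192.168.49.2:32681") // endpoint printed in the log above
        if err != nil {
            panic(err)
        }
        fmt.Println("success! body:", body)
    }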

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/SSHCmd (0.87s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.87s)

TestFunctional/parallel/CpCmd (1.53s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh -n functional-540436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 cp functional-540436:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4005546492/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh -n functional-540436 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.53s)

TestFunctional/parallel/FileSync (0.41s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/989825/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "sudo cat /etc/test/nested/copy/989825/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)
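FileSync works by copying anything staged under the host's $MINIKUBE_HOME/files tree into the node at the same relative path; here that is /etc/test/nested/copy/989825/hosts, where 989825 is the test process's pid and varies per run. A sketch of the verification step, shelling out the same way the test does:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // path taken from the log above; the pid component changes every run
        out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-540436",
            "ssh", "sudo cat /etc/test/nested/copy/989825/hosts").CombinedOutput()
        fmt.Println(string(out), err)
    }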

TestFunctional/parallel/CertSync (2.25s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/989825.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "sudo cat /etc/ssl/certs/989825.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/989825.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "sudo cat /usr/share/ca-certificates/989825.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/9898252.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "sudo cat /etc/ssl/certs/9898252.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/9898252.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "sudo cat /usr/share/ca-certificates/9898252.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.25s)
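The hash-named paths above (/etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0) follow the OpenSSL c_rehash convention: a CA certificate is also linked as <subject_hash>.0. A sketch of computing that hash for a synced .pem by shelling out to openssl (assumes an openssl binary on PATH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func subjectHash(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        h, err := subjectHash("/etc/ssl/certs/989825.pem")
        if err != nil {
            panic(err)
        }
        fmt.Printf("/etc/ssl/certs/%s.0\n", h) // e.g. /etc/ssl/certs/51391683.0
    }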

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-540436 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
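The kubectl call above uses a Go template that ranges over the first node's label map and prints only the keys. The same template construct runs against a plain map in a standalone program:

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        labels := map[string]string{ // illustrative labels, not the node's real set
            "kubernetes.io/arch": "arm64",
            "kubernetes.io/os":   "linux",
        }
        tmpl := template.Must(template.New("keys").Parse(
            "{{range $k, $v := .}}{{$k}} {{end}}\n"))
        _ = tmpl.Execute(os.Stdout, labels) // maps are ranged in sorted key order
    }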

TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-540436 ssh "sudo systemctl is-active docker": exit status 1 (347.948085ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-540436 ssh "sudo systemctl is-active containerd": exit status 1 (373.359779ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
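Both probes rely on systemctl's exit-code contract: `systemctl is-active` prints the unit state and exits 3 for an inactive unit, which the ssh layer surfaces as "Process exited with status 3". A sketch of reading that code from Go:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func isActive(unit string) (bool, error) {
        err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
        if err == nil {
            return true, nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 3 { // 3 = inactive/dead, as in the log
            return false, nil
        }
        return false, err
    }

    func main() {
        active, err := isActive("docker")
        fmt.Println("docker active:", active, "err:", err)
    }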

TestFunctional/parallel/License (0.45s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.45s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.83s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.83s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-540436 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-540436
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-540436 image ls --format short --alsologtostderr:
I0830 21:52:28.158916 1017129 out.go:296] Setting OutFile to fd 1 ...
I0830 21:52:28.159125 1017129 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:52:28.159130 1017129 out.go:309] Setting ErrFile to fd 2...
I0830 21:52:28.159135 1017129 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:52:28.159394 1017129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
I0830 21:52:28.159972 1017129 config.go:182] Loaded profile config "functional-540436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:52:28.160101 1017129 config.go:182] Loaded profile config "functional-540436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:52:28.160625 1017129 cli_runner.go:164] Run: docker container inspect functional-540436 --format={{.State.Status}}
I0830 21:52:28.184993 1017129 ssh_runner.go:195] Run: systemctl --version
I0830 21:52:28.185050 1017129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-540436
I0830 21:52:28.210912 1017129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/functional-540436/id_rsa Username:docker}
I0830 21:52:28.328292 1017129 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-540436 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver          | v1.28.1            | b29fb62480892 | 121MB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b18bf71b941ba | 60.9MB |
| gcr.io/google-containers/addon-resizer  | functional-540436  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-proxy              | v1.28.1            | 812f5241df7fd | 69.9MB |
| localhost/my-image                      | functional-540436  | 8f56a69518fe2 | 1.64MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/kube-controller-manager | v1.28.1            | 8b6e1980b7584 | 117MB  |
| registry.k8s.io/kube-scheduler          | v1.28.1            | b4a5a57e99492 | 59.2MB |
| docker.io/library/nginx                 | alpine             | fa0c6bb795403 | 45.3MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-540436 image ls --format table --alsologtostderr:
I0830 21:52:33.500999 1017537 out.go:296] Setting OutFile to fd 1 ...
I0830 21:52:33.501174 1017537 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:52:33.501186 1017537 out.go:309] Setting ErrFile to fd 2...
I0830 21:52:33.501192 1017537 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:52:33.501486 1017537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
I0830 21:52:33.505389 1017537 config.go:182] Loaded profile config "functional-540436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:52:33.505560 1017537 config.go:182] Loaded profile config "functional-540436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:52:33.506043 1017537 cli_runner.go:164] Run: docker container inspect functional-540436 --format={{.State.Status}}
I0830 21:52:33.557562 1017537 ssh_runner.go:195] Run: systemctl --version
I0830 21:52:33.557618 1017537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-540436
I0830 21:52:33.598738 1017537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/functional-540436/id_rsa Username:docker}
I0830 21:52:33.766077 1017537 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.66s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-540436 image ls --format json --alsologtostderr:
[{"id":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f","docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"60881430"},{"id":"4ff53924c20d864447b9939a868e151b265b6ea0b87d4ed969a36ea3e3b2f6ec","repoDigests":["docker.io/library/a3a4d6411ade5cf0444c1bca7fc00150fa38c9c284f3bf3d64985bb28692824c-tmp@sha256:a00eeebed5224ce17d80597e2c68bf46bb8385e0790c2ce4903be3f0771d8954"],"repoTags":[],"size":"1637644"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{
"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-540436"],"size":"34114467"},{"id":"ba04bb24b95753201135cbc420b233c1b0
b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e7
2e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1","repoDigests":["docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70","docker.io/library/nginx@sha256:700873f42f88d156b7f78f32f0a1dc782286eedc0f175d62d90870820dd98790"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45265718"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns
@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:d4ad404d1c05c2f18b76f5d6936b838be07fed14b3ffefd09a6b2f0c20e3ef5c","registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"120857550"},{"id":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4a0dd5abeba8e3ca67884fe9db43e8dbb299ad3199f0c6281e8a70f03ce4248f","registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"117187378"},{"id":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","repoDigests":["registry.k8s.
io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c","registry.k8s.io/kube-proxy@sha256:a9d9eaff8bae5cb45cc640255fd1490c85c3517d92f2c78bcd71dde9a12d5220"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"69926807"},{"id":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0bb4ad9c0c3d2258bc97616ddb51291e5d20d6ba7d4406767f4355f56fab842d","registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"59188020"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"8f56a69518fe2abadc3b3160d5d7d6c70f0fd9a7555c11bd2c
8fee1d754e7729","repoDigests":["localhost/my-image@sha256:ea38d45747f11fdaf32f90c69922bf8f3fcffc6c6d5a73be55e6575a3f24827d"],"repoTags":["localhost/my-image:functional-540436"],"size":"1640226"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-540436 image ls --format json --alsologtostderr:
I0830 21:52:32.839987 1017492 out.go:296] Setting OutFile to fd 1 ...
I0830 21:52:32.840142 1017492 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:52:32.840150 1017492 out.go:309] Setting ErrFile to fd 2...
I0830 21:52:32.840156 1017492 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:52:32.840426 1017492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
I0830 21:52:32.841082 1017492 config.go:182] Loaded profile config "functional-540436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:52:32.841252 1017492 config.go:182] Loaded profile config "functional-540436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:52:32.841719 1017492 cli_runner.go:164] Run: docker container inspect functional-540436 --format={{.State.Status}}
I0830 21:52:32.869364 1017492 ssh_runner.go:195] Run: systemctl --version
I0830 21:52:32.869419 1017492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-540436
I0830 21:52:32.917475 1017492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/functional-540436/id_rsa Username:docker}
I0830 21:52:33.098761 1017492 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.56s)
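The JSON above is an array of image records with id, repoDigests, repoTags, and size fields (size is a string holding a byte count). A minimal decoder for that shape, with a shortened inline sample:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // image matches the fields visible in the JSON output above.
    type image struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"`
    }

    func main() {
        data := []byte(`[{"id":"abc","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"}]`)
        var imgs []image
        if err := json.Unmarshal(data, &imgs); err != nil {
            panic(err)
        }
        for _, im := range imgs {
            fmt.Println(im.RepoTags, im.Size, "bytes")
        }
    }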

TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-540436 image ls --format yaml --alsologtostderr:
- id: b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0bb4ad9c0c3d2258bc97616ddb51291e5d20d6ba7d4406767f4355f56fab842d
- registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "59188020"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "60881430"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-540436
size: "34114467"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4a0dd5abeba8e3ca67884fe9db43e8dbb299ad3199f0c6281e8a70f03ce4248f
- registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "117187378"
- id: 812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c
- registry.k8s.io/kube-proxy@sha256:a9d9eaff8bae5cb45cc640255fd1490c85c3517d92f2c78bcd71dde9a12d5220
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "69926807"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1
repoDigests:
- docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70
- docker.io/library/nginx@sha256:700873f42f88d156b7f78f32f0a1dc782286eedc0f175d62d90870820dd98790
repoTags:
- docker.io/library/nginx:alpine
size: "45265718"
- id: b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:d4ad404d1c05c2f18b76f5d6936b838be07fed14b3ffefd09a6b2f0c20e3ef5c
- registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "120857550"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-540436 image ls --format yaml --alsologtostderr:
I0830 21:52:28.470198 1017157 out.go:296] Setting OutFile to fd 1 ...
I0830 21:52:28.470478 1017157 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:52:28.470503 1017157 out.go:309] Setting ErrFile to fd 2...
I0830 21:52:28.470532 1017157 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:52:28.470831 1017157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
I0830 21:52:28.471471 1017157 config.go:182] Loaded profile config "functional-540436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:52:28.471664 1017157 config.go:182] Loaded profile config "functional-540436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:52:28.472143 1017157 cli_runner.go:164] Run: docker container inspect functional-540436 --format={{.State.Status}}
I0830 21:52:28.514748 1017157 ssh_runner.go:195] Run: systemctl --version
I0830 21:52:28.514804 1017157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-540436
I0830 21:52:28.551644 1017157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/functional-540436/id_rsa Username:docker}
I0830 21:52:28.661683 1017157 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-540436 ssh pgrep buildkitd: exit status 1 (385.733ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image build -t localhost/my-image:functional-540436 testdata/build --alsologtostderr
2023/08/30 21:52:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-540436 image build -t localhost/my-image:functional-540436 testdata/build --alsologtostderr: (4.129252199s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-540436 image build -t localhost/my-image:functional-540436 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4ff53924c20
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-540436
--> 8f56a69518f
Successfully tagged localhost/my-image:functional-540436
8f56a69518fe2abadc3b3160d5d7d6c70f0fd9a7555c11bd2c8fee1d754e7729
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-540436 image build -t localhost/my-image:functional-540436 testdata/build --alsologtostderr:
I0830 21:52:29.202351 1017235 out.go:296] Setting OutFile to fd 1 ...
I0830 21:52:29.203113 1017235 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:52:29.203151 1017235 out.go:309] Setting ErrFile to fd 2...
I0830 21:52:29.203172 1017235 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 21:52:29.203468 1017235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
I0830 21:52:29.204142 1017235 config.go:182] Loaded profile config "functional-540436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:52:29.204955 1017235 config.go:182] Loaded profile config "functional-540436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0830 21:52:29.205646 1017235 cli_runner.go:164] Run: docker container inspect functional-540436 --format={{.State.Status}}
I0830 21:52:29.226243 1017235 ssh_runner.go:195] Run: systemctl --version
I0830 21:52:29.226291 1017235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-540436
I0830 21:52:29.251395 1017235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34023 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/functional-540436/id_rsa Username:docker}
I0830 21:52:29.373080 1017235 build_images.go:151] Building image from path: /tmp/build.3075600379.tar
I0830 21:52:29.373212 1017235 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0830 21:52:29.397538 1017235 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3075600379.tar
I0830 21:52:29.402905 1017235 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3075600379.tar: stat -c "%s %y" /var/lib/minikube/build/build.3075600379.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3075600379.tar': No such file or directory
I0830 21:52:29.402953 1017235 ssh_runner.go:362] scp /tmp/build.3075600379.tar --> /var/lib/minikube/build/build.3075600379.tar (3072 bytes)
I0830 21:52:29.447604 1017235 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3075600379
I0830 21:52:29.498414 1017235 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3075600379 -xf /var/lib/minikube/build/build.3075600379.tar
I0830 21:52:29.539313 1017235 crio.go:297] Building image: /var/lib/minikube/build/build.3075600379
I0830 21:52:29.539462 1017235 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-540436 /var/lib/minikube/build/build.3075600379 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0830 21:52:33.172962 1017235 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-540436 /var/lib/minikube/build/build.3075600379 --cgroup-manager=cgroupfs: (3.633441288s)
I0830 21:52:33.173024 1017235 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3075600379
I0830 21:52:33.211632 1017235 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3075600379.tar
I0830 21:52:33.244452 1017235 build_images.go:207] Built localhost/my-image:functional-540436 from /tmp/build.3075600379.tar
I0830 21:52:33.244478 1017235 build_images.go:123] succeeded building to: functional-540436
I0830 21:52:33.244482 1017235 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.07s)
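For reference, the three logged STEP lines pin down the build context almost completely. The sketch below is a hypothetical reconstruction of testdata/build (the content.txt payload is assumed; only its filename appears in the log), not the repository's actual fixture:

    # Recreate a build context matching STEP 1/3 through STEP 3/3 above.
    mkdir -p testdata/build && cd testdata/build
    echo "test" > content.txt    # assumed payload; the log only shows the filename
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    # Build inside the cluster's container runtime, exactly as the test does.
    out/minikube-linux-arm64 -p functional-540436 image build -t localhost/my-image:functional-540436 .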

TestFunctional/parallel/ImageCommands/Setup (2.07s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.041516151s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-540436
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.07s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image load --daemon gcr.io/google-containers/addon-resizer:functional-540436 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-540436 image load --daemon gcr.io/google-containers/addon-resizer:functional-540436 --alsologtostderr: (5.725411059s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.99s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-540436 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-540436 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-l9z4w" [cd927e58-f7e9-4168-9e78-4391c5273bbc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-l9z4w" [cd927e58-f7e9-4168-9e78-4391c5273bbc] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.043092514s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.33s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image load --daemon gcr.io/google-containers/addon-resizer:functional-540436 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-540436 image load --daemon gcr.io/google-containers/addon-resizer:functional-540436 --alsologtostderr: (2.879305055s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.13s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.481777021s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-540436
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image load --daemon gcr.io/google-containers/addon-resizer:functional-540436 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-540436 image load --daemon gcr.io/google-containers/addon-resizer:functional-540436 --alsologtostderr: (4.403277191s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.26s)

TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 service list -o json
functional_test.go:1493: Took "492.791026ms" to run "out/minikube-linux-arm64 -p functional-540436 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30846
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.54s)

TestFunctional/parallel/ServiceCmd/URL (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30846
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)
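Taken together, the ServiceCmd subtests walk the usual expose-and-discover flow. A minimal manual equivalent, reusing the profile and the NodePort endpoint reported above, looks like this:

    # Deploy and expose, then let minikube resolve the NodePort URL (mirrors the logged commands).
    kubectl --context functional-540436 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-540436 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-arm64 -p functional-540436 service hello-node --url    # printed http://192.168.49.2:30846 in this run
    curl -s http://192.168.49.2:30846/    # assumes the node IP is reachable from the host, as on these CI machines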

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-540436 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-540436 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-540436 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-540436 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1013475: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-540436 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (218.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-540436 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a6087e1a-7a1a-4535-99da-d59bcb12f8eb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a6087e1a-7a1a-4535-99da-d59bcb12f8eb] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 3m38.037589275s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (218.62s)
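The 3m38s "healthy within" figure above comes from the harness polling pod state itself. A rough manual equivalent using stock kubectl (the test does not actually shell out to kubectl wait) would be:

    kubectl --context functional-540436 apply -f testdata/testsvc.yaml
    # Block until the nginx-svc pod reports Ready, with the same 4m budget the test allows.
    kubectl --context functional-540436 wait --for=condition=ready pod -l run=nginx-svc --timeout=4m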

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image save gcr.io/google-containers/addon-resizer:functional-540436 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-540436 image save gcr.io/google-containers/addon-resizer:functional-540436 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.122544607s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.12s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image rm gcr.io/google-containers/addon-resizer:functional-540436 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-540436 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.095658143s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.34s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-540436
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 image save --daemon gcr.io/google-containers/addon-resizer:functional-540436 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-540436
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)
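The last four image subtests form a save/load round trip. A condensed sketch of the same flow, reusing the tag from this run (tarball path shortened for readability), is:

    # Export the image from the cluster, delete it there, then restore it from the tarball.
    out/minikube-linux-arm64 -p functional-540436 image save gcr.io/google-containers/addon-resizer:functional-540436 ./addon-resizer-save.tar
    out/minikube-linux-arm64 -p functional-540436 image rm gcr.io/google-containers/addon-resizer:functional-540436
    out/minikube-linux-arm64 -p functional-540436 image load ./addon-resizer-save.tar
    # Alternatively, push it straight into the host Docker daemon and confirm it arrived.
    out/minikube-linux-arm64 -p functional-540436 image save --daemon gcr.io/google-containers/addon-resizer:functional-540436
    docker image inspect gcr.io/google-containers/addon-resizer:functional-540436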

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "356.979719ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "61.939374ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "359.040032ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "53.048702ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (8.57s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-540436 /tmp/TestFunctionalparallelMountCmdany-port3359931863/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1693432330293266880" to /tmp/TestFunctionalparallelMountCmdany-port3359931863/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1693432330293266880" to /tmp/TestFunctionalparallelMountCmdany-port3359931863/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1693432330293266880" to /tmp/TestFunctionalparallelMountCmdany-port3359931863/001/test-1693432330293266880
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-540436 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (375.655512ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 30 21:52 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 30 21:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 30 21:52 test-1693432330293266880
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh cat /mount-9p/test-1693432330293266880
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-540436 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0e677313-2a9c-47c7-9a0f-480c6611e763] Pending
helpers_test.go:344: "busybox-mount" [0e677313-2a9c-47c7-9a0f-480c6611e763] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0e677313-2a9c-47c7-9a0f-480c6611e763] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0e677313-2a9c-47c7-9a0f-480c6611e763] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.019928761s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-540436 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-540436 /tmp/TestFunctionalparallelMountCmdany-port3359931863/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.57s)
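The mount subtests drive a 9p mount from the host into the guest. A minimal sketch of the same verification, with a placeholder host directory standing in for the per-test temp dir, is:

    # Mount a host directory into the guest over 9p (a foreground process; background it for scripting).
    out/minikube-linux-arm64 mount -p functional-540436 /tmp/host-dir:/mount-9p &
    # Confirm a 9p filesystem is mounted at the target, then inspect it from inside the guest.
    out/minikube-linux-arm64 -p functional-540436 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-540436 ssh -- ls -la /mount-9p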

TestFunctional/parallel/MountCmd/specific-port (2.29s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-540436 /tmp/TestFunctionalparallelMountCmdspecific-port3571852726/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-540436 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (568.447987ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-540436 /tmp/TestFunctionalparallelMountCmdspecific-port3571852726/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-540436 ssh "sudo umount -f /mount-9p": exit status 1 (312.167391ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-540436 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-540436 /tmp/TestFunctionalparallelMountCmdspecific-port3571852726/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.29s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.4s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-540436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup641961158/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-540436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup641961158/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-540436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup641961158/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-540436 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-540436 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-540436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup641961158/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-540436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup641961158/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-540436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup641961158/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.40s)
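Note that VerifyCleanup tears the three mounts down with minikube's own kill switch rather than by signalling individual PIDs, which is why the later "unable to find parent, assuming dead" messages are expected:

    # One command terminates every mount process belonging to the profile.
    out/minikube-linux-arm64 mount -p functional-540436 --kill=true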

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-540436 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.138.29 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
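The tunnel subtests hinge on minikube tunnel assigning an ingress IP to LoadBalancer services. A manual sketch of the same check, assuming the nginx-svc service from WaitService/Setup is still running, is:

    # Keep a tunnel up in the background so LoadBalancer services receive an ingress IP.
    out/minikube-linux-arm64 -p functional-540436 tunnel --alsologtostderr &
    # Read back the assigned IP (10.106.138.29 in this run) and probe it directly.
    IP=$(kubectl --context functional-540436 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP/" >/dev/null && echo "tunnel at http://$IP is working!"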

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-540436 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.15s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-540436
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-540436
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-540436
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestIngressAddonLegacy/StartLegacyK8sCluster (88.76s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-855931 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0830 21:53:32.358552  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 21:53:32.363800  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 21:53:32.374049  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 21:53:32.394290  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 21:53:32.434533  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 21:53:32.515020  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 21:53:32.675538  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 21:53:32.996109  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 21:53:33.636488  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 21:53:34.917632  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 21:53:37.478381  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 21:53:42.598928  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 21:53:52.839400  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-855931 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m28.757936443s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (88.76s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.51s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-855931 addons enable ingress --alsologtostderr -v=5
E0830 21:54:13.319611  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-855931 addons enable ingress --alsologtostderr -v=5: (11.512169997s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.51s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.8s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-855931 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.80s)

TestJSONOutput/start/Command (52.88s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-843417 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-843417 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (52.882515138s)
--- PASS: TestJSONOutput/start/Command (52.88s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.82s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-843417 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.82s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.75s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-843417 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.75s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.98s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-843417 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-843417 --output=json --user=testUser: (5.979349328s)
--- PASS: TestJSONOutput/stop/Command (5.98s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-154361 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-154361 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.638852ms)

-- stdout --
	{"specversion":"1.0","id":"6fe3bcd8-23cf-4b5f-8faf-4dbad297493a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-154361] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"90833034-63e0-4db3-b045-41d90dac942e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17145"}}
	{"specversion":"1.0","id":"94d56f10-65b4-4310-ae8a-cd5e1d6114aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9a39e564-9cae-4372-9846-ac775594d0be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig"}}
	{"specversion":"1.0","id":"78678937-8038-452e-ac05-c468dfcc97df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube"}}
	{"specversion":"1.0","id":"e0f6a496-8dcf-4b15-884d-9a21b1a6cc68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f5985fe3-2e31-493d-be07-3afa9809dc8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9a6fa347-9d63-42fc-999b-71b5acc20d36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-154361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-154361
--- PASS: TestErrorJSONOutput (0.23s)
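Each stdout line above is a CloudEvents-style JSON record, so the error event can be picked out mechanically. A one-liner sketch (jq is assumed to be available; the minikube command itself still exits 56):

    # Filter the event stream down to the error event's name, message, and exit code.
    out/minikube-linux-arm64 start -p json-output-error-154361 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data | "\(.name): \(.message) (exit \(.exitcode))"'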

TestKicCustomNetwork/create_custom_network (46.44s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-523255 --network=
E0830 21:58:32.359188  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 21:59:00.040497  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-523255 --network=: (44.281845798s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-523255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-523255
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-523255: (2.120237168s)
--- PASS: TestKicCustomNetwork/create_custom_network (46.44s)

TestKicCustomNetwork/use_default_bridge_network (35.95s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-344391 --network=bridge
E0830 21:59:18.294407  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 21:59:18.301344  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 21:59:18.311578  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 21:59:18.331729  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 21:59:18.373235  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 21:59:18.453648  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 21:59:18.613998  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 21:59:18.934297  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 21:59:19.575241  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 21:59:20.855745  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 21:59:23.416221  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 21:59:28.537076  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 21:59:38.778139  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-344391 --network=bridge: (33.896680022s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-344391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-344391
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-344391: (2.021326786s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.95s)
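
This subtest exercises attaching a profile to Docker's default bridge (--network=bridge) instead of a per-profile network; the check at kic_custom_network_test.go:150 inspects the resulting network list. The flow can be replayed by hand with the commands from the log (profile name taken from the run above; any unused name works):

    out/minikube-linux-arm64 start -p docker-network-344391 --network=bridge
    docker network ls --format '{{.Name}}'
    out/minikube-linux-arm64 delete -p docker-network-344391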

TestKicExistingNetwork (33.91s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-208212 --network=existing-network
E0830 21:59:59.258347  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-208212 --network=existing-network: (31.78249451s)
helpers_test.go:175: Cleaning up "existing-network-208212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-208212
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-208212: (1.961110555s)
--- PASS: TestKicExistingNetwork (33.91s)
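
Here the profile is started on a Docker network that already exists (--network=existing-network). The network itself is created by test setup code rather than by any command in the log, so the first and last lines below are an assumed manual equivalent (a plain bridge network), not something the test logged:

    docker network create existing-network
    out/minikube-linux-arm64 start -p existing-network-208212 --network=existing-network
    out/minikube-linux-arm64 delete -p existing-network-208212
    docker network rm existing-network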

TestKicCustomSubnet (34.89s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-056790 --subnet=192.168.60.0/24
E0830 22:00:40.218564  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 22:00:43.073425  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-056790 --subnet=192.168.60.0/24: (32.654767376s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-056790 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-056790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-056790
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-056790: (2.219001816s)
--- PASS: TestKicCustomSubnet (34.89s)
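
The subnet check is the docker network inspect Go template shown above, which extracts the first IPAM config entry; on a passing run it prints the requested CIDR back:

    out/minikube-linux-arm64 start -p custom-subnet-056790 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-056790 --format "{{(index .IPAM.Config 0).Subnet}}"
    # expected output: 192.168.60.0/24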

TestKicStaticIP (35.8s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-469093 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-469093 --static-ip=192.168.200.200: (33.462527837s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-469093 ip
helpers_test.go:175: Cleaning up "static-ip-469093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-469093
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-469093: (2.139090425s)
--- PASS: TestKicStaticIP (35.80s)
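
The static-IP path is verified the same way: start with --static-ip and read the address back with the ip subcommand, which should echo 192.168.200.200 on a passing run:

    out/minikube-linux-arm64 start -p static-ip-469093 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-469093 ip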

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (74.72s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-954805 --driver=docker  --container-runtime=crio
E0830 22:02:02.138774  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 22:02:06.115283  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-954805 --driver=docker  --container-runtime=crio: (35.434490634s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-957685 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-957685 --driver=docker  --container-runtime=crio: (33.963692505s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-954805
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-957685
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-957685" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-957685
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-957685: (2.042394567s)
helpers_test.go:175: Cleaning up "first-954805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-954805
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-954805: (2.017937946s)
--- PASS: TestMinikubeProfile (74.72s)
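
The test switches the active profile back and forth and reads profile list -ojson after each switch. A minimal sketch of the same inspection follows; the jq filter and the .valid[].Name field path are assumptions about the JSON shape, not something shown in the log:

    out/minikube-linux-arm64 profile first-954805
    out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'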

TestMountStart/serial/StartWithMountFirst (7.17s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-460394 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-460394 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.17210191s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.17s)
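
The flags here pin the host-folder mount's ownership (uid/gid 0), msize, and port so the Verify* steps that follow can list /minikube-host inside the guest. Condensed reproduction, same flags as the log but wrapped for readability:

    out/minikube-linux-arm64 start -p mount-start-1-460394 --memory=2048 --mount \
        --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
        --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p mount-start-1-460394 ssh -- ls /minikube-host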

TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-460394 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

TestMountStart/serial/StartWithMountSecond (7.11s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-462667 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-462667 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.112337997s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.11s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-462667 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-460394 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-460394 --alsologtostderr -v=5: (1.711125012s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-462667 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-462667
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-462667: (1.215102996s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (8.85s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-462667
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-462667: (7.852368667s)
--- PASS: TestMountStart/serial/RestartStopped (8.85s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-462667 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (128.86s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-994875 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0830 22:03:32.358557  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 22:04:18.292409  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 22:04:45.978969  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-994875 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m8.313946405s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (128.86s)

TestMultiNode/serial/DeployApp2Nodes (5.87s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994875 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994875 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-994875 -- rollout status deployment/busybox: (3.656100902s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994875 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994875 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994875 -- exec busybox-5bc68d56bd-8gn7x -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994875 -- exec busybox-5bc68d56bd-rdfhb -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994875 -- exec busybox-5bc68d56bd-8gn7x -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994875 -- exec busybox-5bc68d56bd-rdfhb -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994875 -- exec busybox-5bc68d56bd-8gn7x -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994875 -- exec busybox-5bc68d56bd-rdfhb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.87s)
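
The DNS checks run nslookup inside the busybox pods scheduled across both nodes. Pod names are generated by the deployment, so a manual rerun has to look them up first; <pod> below is a placeholder for either generated name:

    out/minikube-linux-arm64 kubectl -p multinode-994875 -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-arm64 kubectl -p multinode-994875 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local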

TestMultiNode/serial/AddNode (21.35s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-994875 -v 3 --alsologtostderr
E0830 22:05:43.072714  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-994875 -v 3 --alsologtostderr: (20.605558146s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (21.35s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (11.24s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 cp testdata/cp-test.txt multinode-994875:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 cp multinode-994875:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2956715306/001/cp-test_multinode-994875.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 cp multinode-994875:/home/docker/cp-test.txt multinode-994875-m02:/home/docker/cp-test_multinode-994875_multinode-994875-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875-m02 "sudo cat /home/docker/cp-test_multinode-994875_multinode-994875-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 cp multinode-994875:/home/docker/cp-test.txt multinode-994875-m03:/home/docker/cp-test_multinode-994875_multinode-994875-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875-m03 "sudo cat /home/docker/cp-test_multinode-994875_multinode-994875-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 cp testdata/cp-test.txt multinode-994875-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 cp multinode-994875-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2956715306/001/cp-test_multinode-994875-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 cp multinode-994875-m02:/home/docker/cp-test.txt multinode-994875:/home/docker/cp-test_multinode-994875-m02_multinode-994875.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875 "sudo cat /home/docker/cp-test_multinode-994875-m02_multinode-994875.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 cp multinode-994875-m02:/home/docker/cp-test.txt multinode-994875-m03:/home/docker/cp-test_multinode-994875-m02_multinode-994875-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875-m03 "sudo cat /home/docker/cp-test_multinode-994875-m02_multinode-994875-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 cp testdata/cp-test.txt multinode-994875-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 cp multinode-994875-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2956715306/001/cp-test_multinode-994875-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 cp multinode-994875-m03:/home/docker/cp-test.txt multinode-994875:/home/docker/cp-test_multinode-994875-m03_multinode-994875.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875 "sudo cat /home/docker/cp-test_multinode-994875-m03_multinode-994875.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 cp multinode-994875-m03:/home/docker/cp-test.txt multinode-994875-m02:/home/docker/cp-test_multinode-994875-m03_multinode-994875-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875-m02 "sudo cat /home/docker/cp-test_multinode-994875-m03_multinode-994875-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.24s)
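
The copy matrix above is one pattern repeated for every node pair: cp a file into a node, cp it node-to-node, then cat it over ssh on the destination. One round of the pattern, verbatim from the log:

    out/minikube-linux-arm64 -p multinode-994875 cp testdata/cp-test.txt multinode-994875:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-994875 cp multinode-994875:/home/docker/cp-test.txt multinode-994875-m02:/home/docker/cp-test_multinode-994875_multinode-994875-m02.txt
    out/minikube-linux-arm64 -p multinode-994875 ssh -n multinode-994875-m02 "sudo cat /home/docker/cp-test_multinode-994875_multinode-994875-m02.txt"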

TestMultiNode/serial/StopNode (2.37s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-994875 node stop m03: (1.248779458s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-994875 status: exit status 7 (556.367486ms)
-- stdout --
	multinode-994875
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-994875-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-994875-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-994875 status --alsologtostderr: exit status 7 (568.143277ms)
-- stdout --
	multinode-994875
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-994875-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-994875-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0830 22:06:11.283700 1063924 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:06:11.283944 1063924 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:06:11.283971 1063924 out.go:309] Setting ErrFile to fd 2...
	I0830 22:06:11.283989 1063924 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:06:11.284307 1063924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
	I0830 22:06:11.284532 1063924 out.go:303] Setting JSON to false
	I0830 22:06:11.284651 1063924 mustload.go:65] Loading cluster: multinode-994875
	I0830 22:06:11.284729 1063924 notify.go:220] Checking for updates...
	I0830 22:06:11.285079 1063924 config.go:182] Loaded profile config "multinode-994875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:06:11.285113 1063924 status.go:255] checking status of multinode-994875 ...
	I0830 22:06:11.285678 1063924 cli_runner.go:164] Run: docker container inspect multinode-994875 --format={{.State.Status}}
	I0830 22:06:11.305962 1063924 status.go:330] multinode-994875 host status = "Running" (err=<nil>)
	I0830 22:06:11.305987 1063924 host.go:66] Checking if "multinode-994875" exists ...
	I0830 22:06:11.306304 1063924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994875
	I0830 22:06:11.323927 1063924 host.go:66] Checking if "multinode-994875" exists ...
	I0830 22:06:11.324267 1063924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 22:06:11.324320 1063924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875
	I0830 22:06:11.354179 1063924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34088 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875/id_rsa Username:docker}
	I0830 22:06:11.451867 1063924 ssh_runner.go:195] Run: systemctl --version
	I0830 22:06:11.457711 1063924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:06:11.471887 1063924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:06:11.560708 1063924 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-08-30 22:06:11.549896448 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:06:11.561342 1063924 kubeconfig.go:92] found "multinode-994875" server: "https://192.168.58.2:8443"
	I0830 22:06:11.561363 1063924 api_server.go:166] Checking apiserver status ...
	I0830 22:06:11.561415 1063924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:06:11.574793 1063924 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1217/cgroup
	I0830 22:06:11.587043 1063924 api_server.go:182] apiserver freezer: "8:freezer:/docker/9f440389aa1f0e1edb3413132ceff0a094431388097aea03d597a527064c8544/crio/crio-76cc08062749798806c8c2383e99681ab6e69799e3a50f58adfcb6b432a42c92"
	I0830 22:06:11.587113 1063924 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9f440389aa1f0e1edb3413132ceff0a094431388097aea03d597a527064c8544/crio/crio-76cc08062749798806c8c2383e99681ab6e69799e3a50f58adfcb6b432a42c92/freezer.state
	I0830 22:06:11.598075 1063924 api_server.go:204] freezer state: "THAWED"
	I0830 22:06:11.598101 1063924 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0830 22:06:11.607528 1063924 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0830 22:06:11.607555 1063924 status.go:421] multinode-994875 apiserver status = Running (err=<nil>)
	I0830 22:06:11.607567 1063924 status.go:257] multinode-994875 status: &{Name:multinode-994875 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0830 22:06:11.607585 1063924 status.go:255] checking status of multinode-994875-m02 ...
	I0830 22:06:11.607947 1063924 cli_runner.go:164] Run: docker container inspect multinode-994875-m02 --format={{.State.Status}}
	I0830 22:06:11.625720 1063924 status.go:330] multinode-994875-m02 host status = "Running" (err=<nil>)
	I0830 22:06:11.625767 1063924 host.go:66] Checking if "multinode-994875-m02" exists ...
	I0830 22:06:11.626076 1063924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994875-m02
	I0830 22:06:11.644665 1063924 host.go:66] Checking if "multinode-994875-m02" exists ...
	I0830 22:06:11.645081 1063924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 22:06:11.645156 1063924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994875-m02
	I0830 22:06:11.663762 1063924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34093 SSHKeyPath:/home/jenkins/minikube-integration/17145-984449/.minikube/machines/multinode-994875-m02/id_rsa Username:docker}
	I0830 22:06:11.763915 1063924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:06:11.778997 1063924 status.go:257] multinode-994875-m02 status: &{Name:multinode-994875-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0830 22:06:11.779033 1063924 status.go:255] checking status of multinode-994875-m03 ...
	I0830 22:06:11.779361 1063924 cli_runner.go:164] Run: docker container inspect multinode-994875-m03 --format={{.State.Status}}
	I0830 22:06:11.797725 1063924 status.go:330] multinode-994875-m03 host status = "Stopped" (err=<nil>)
	I0830 22:06:11.797750 1063924 status.go:343] host is not running, skipping remaining checks
	I0830 22:06:11.797758 1063924 status.go:257] multinode-994875-m03 status: &{Name:multinode-994875-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)
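
The non-zero exits above are the expected outcome: with one node stopped, status reports exit status 7, and the test asserts on that code plus the per-node host/kubelet lines. A quick manual probe of the same behavior:

    out/minikube-linux-arm64 -p multinode-994875 node stop m03
    out/minikube-linux-arm64 -p multinode-994875 status; echo "status exited $?"   # 7 expected while m03 is down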

TestMultiNode/serial/StartAfterStop (12.78s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-994875 node start m03 --alsologtostderr: (11.889467604s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.78s)

TestMultiNode/serial/RestartKeepsNodes (119.33s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-994875
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-994875
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-994875: (25.020586799s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-994875 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-994875 --wait=true -v=8 --alsologtostderr: (1m34.174170434s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-994875
--- PASS: TestMultiNode/serial/RestartKeepsNodes (119.33s)
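
The assertion here is simply that the node list is unchanged across a full stop/start cycle:

    out/minikube-linux-arm64 node list -p multinode-994875
    out/minikube-linux-arm64 stop -p multinode-994875
    out/minikube-linux-arm64 start -p multinode-994875 --wait=true -v=8 --alsologtostderr
    out/minikube-linux-arm64 node list -p multinode-994875   # should match the first listing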

TestMultiNode/serial/DeleteNode (5.23s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-994875 node delete m03: (4.389395676s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.23s)
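
The final readiness check uses a kubectl go-template that prints one status line per node Ready condition, which after deleting m03 should presumably show two True lines. The template, unwrapped from the nested quoting in the log:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'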

TestMultiNode/serial/StopMultiNode (24.08s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 stop
E0830 22:08:32.359115  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-994875 stop: (23.895701601s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-994875 status: exit status 7 (93.66204ms)
-- stdout --
	multinode-994875
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-994875-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-994875 status --alsologtostderr: exit status 7 (88.322583ms)
-- stdout --
	multinode-994875
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-994875-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0830 22:08:53.181061 1072077 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:08:53.181268 1072077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:08:53.181292 1072077 out.go:309] Setting ErrFile to fd 2...
	I0830 22:08:53.181312 1072077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:08:53.181601 1072077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
	I0830 22:08:53.181810 1072077 out.go:303] Setting JSON to false
	I0830 22:08:53.181929 1072077 mustload.go:65] Loading cluster: multinode-994875
	I0830 22:08:53.182009 1072077 notify.go:220] Checking for updates...
	I0830 22:08:53.182368 1072077 config.go:182] Loaded profile config "multinode-994875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:08:53.182404 1072077 status.go:255] checking status of multinode-994875 ...
	I0830 22:08:53.182893 1072077 cli_runner.go:164] Run: docker container inspect multinode-994875 --format={{.State.Status}}
	I0830 22:08:53.202341 1072077 status.go:330] multinode-994875 host status = "Stopped" (err=<nil>)
	I0830 22:08:53.202368 1072077 status.go:343] host is not running, skipping remaining checks
	I0830 22:08:53.202375 1072077 status.go:257] multinode-994875 status: &{Name:multinode-994875 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0830 22:08:53.202397 1072077 status.go:255] checking status of multinode-994875-m02 ...
	I0830 22:08:53.202714 1072077 cli_runner.go:164] Run: docker container inspect multinode-994875-m02 --format={{.State.Status}}
	I0830 22:08:53.220982 1072077 status.go:330] multinode-994875-m02 host status = "Stopped" (err=<nil>)
	I0830 22:08:53.221011 1072077 status.go:343] host is not running, skipping remaining checks
	I0830 22:08:53.221019 1072077 status.go:257] multinode-994875-m02 status: &{Name:multinode-994875-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.08s)

TestMultiNode/serial/RestartMultiNode (81.05s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-994875 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0830 22:09:18.292369  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 22:09:55.401614  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-994875 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m20.253227454s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994875 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (81.05s)

TestMultiNode/serial/ValidateNameConflict (36.15s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-994875
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-994875-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-994875-m02 --driver=docker  --container-runtime=crio: exit status 14 (90.937096ms)
-- stdout --
	* [multinode-994875-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17145
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-994875-m02' is duplicated with machine name 'multinode-994875-m02' in profile 'multinode-994875'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-994875-m03 --driver=docker  --container-runtime=crio
E0830 22:10:43.072563  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-994875-m03 --driver=docker  --container-runtime=crio: (33.587380097s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-994875
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-994875: exit status 80 (362.042515ms)
-- stdout --
	* Adding node m03 to cluster multinode-994875
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-994875-m03 already exists in multinode-994875-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-994875-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-994875-m03: (2.040232517s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.15s)
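
Two separate guards are exercised: start refuses a profile name that collides with an existing machine name (exit 14, MK_USAGE), and node add refuses when the next generated node name collides with an existing profile (exit 80, GUEST_NODE_ADD). Reproduction is just the two failing commands from the log:

    out/minikube-linux-arm64 start -p multinode-994875-m02 --driver=docker --container-runtime=crio   # exit 14
    out/minikube-linux-arm64 node add -p multinode-994875   # exit 80 while profile multinode-994875-m03 exists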

TestPreload (171.46s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-970199 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-970199 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m24.42090005s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-970199 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-970199 image pull gcr.io/k8s-minikube/busybox: (2.072372871s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-970199
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-970199: (5.842627732s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-970199 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0830 22:13:32.359051  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-970199 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m16.468525147s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-970199 image list
helpers_test.go:175: Cleaning up "test-preload-970199" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-970199
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-970199: (2.405583307s)
--- PASS: TestPreload (171.46s)
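
The sequence tests cache survival: start without the preload tarball on an older Kubernetes, pull an extra image, stop, restart on the current default version, and confirm the extra image is still present:

    out/minikube-linux-arm64 start -p test-preload-970199 --memory=2200 --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
    out/minikube-linux-arm64 -p test-preload-970199 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-arm64 stop -p test-preload-970199
    out/minikube-linux-arm64 start -p test-preload-970199 --memory=2200 --wait=true --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p test-preload-970199 image list   # busybox should still be listed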

TestScheduledStopUnix (109.95s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-358973 --memory=2048 --driver=docker  --container-runtime=crio
E0830 22:14:18.291730  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-358973 --memory=2048 --driver=docker  --container-runtime=crio: (33.128058499s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-358973 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-358973 -n scheduled-stop-358973
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-358973 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-358973 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-358973 -n scheduled-stop-358973
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-358973
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-358973 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-358973
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-358973: exit status 7 (85.489028ms)
-- stdout --
	scheduled-stop-358973
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-358973 -n scheduled-stop-358973
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-358973 -n scheduled-stop-358973: exit status 7 (78.257003ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-358973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-358973
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-358973: (5.159178257s)
--- PASS: TestScheduledStopUnix (109.95s)
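
The schedule/cancel cycle can be driven manually with the same flags; status --format={{.TimeToStop}} is how the test observes whether a stop is armed, and the final 15s schedule is allowed to fire, after which status exits 7:

    out/minikube-linux-arm64 stop -p scheduled-stop-358973 --schedule 5m
    out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-358973
    out/minikube-linux-arm64 stop -p scheduled-stop-358973 --cancel-scheduled
    out/minikube-linux-arm64 stop -p scheduled-stop-358973 --schedule 15s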

TestInsufficientStorage (13.64s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-667249 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
E0830 22:15:41.341581  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 22:15:43.073368  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-667249 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.057853041s)
-- stdout --
	{"specversion":"1.0","id":"8e8d938e-98cd-4a9a-8af7-ad12294b0cf2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-667249] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d14707e-2190-418a-9587-16a24026a342","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17145"}}
	{"specversion":"1.0","id":"3facdb45-8b33-4fc4-b982-9f95e5fb915c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7fa4c600-0032-4f66-95a0-ea4893438121","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig"}}
	{"specversion":"1.0","id":"dd75b0bb-394c-45cc-b86e-eb5ac58726c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube"}}
	{"specversion":"1.0","id":"cb830cf0-688d-45ad-8498-1dd86f019670","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"dafbfee9-cf96-4617-83a4-cfa0c528f4b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d6fe83f5-af90-4fa1-9e2b-3510aff23f50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"16b49d70-553c-4545-9fb8-666ebf662adb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"294b1df7-a6c8-49f8-bc56-6a3cb00d446d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4145e13f-c36c-47a6-9d4d-dba0bdf500af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"afa505f5-dac2-4a74-a7be-5d3e9a7bb6d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-667249 in cluster insufficient-storage-667249","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e60d2782-9d2c-49e4-abf6-d6275a1426c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ee5194c3-8547-43ae-a707-75dfedb00da6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"37a431c4-2895-4bf3-bc5a-70d394a97762","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-667249 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-667249 --output=json --layout=cluster: exit status 7 (328.992181ms)
-- stdout --
	{"Name":"insufficient-storage-667249","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-667249","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0830 22:15:49.660325 1088729 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-667249" does not appear in /home/jenkins/minikube-integration/17145-984449/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-667249 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-667249 --output=json --layout=cluster: exit status 7 (320.192542ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-667249","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-667249","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 22:15:49.984039 1088785 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-667249" does not appear in /home/jenkins/minikube-integration/17145-984449/kubeconfig
	E0830 22:15:49.996041 1088785 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/insufficient-storage-667249/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-667249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-667249
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-667249: (1.932422632s)
--- PASS: TestInsufficientStorage (13.64s)
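For scripted follow-ups, the cluster-status JSON captured above is easy to machine-check. A minimal shell sketch, assuming jq is available (the filter and the "|| true" guard are illustrative, not part of the test):

	# minikube exits 7 on InsufficientStorage, so keep the shell from aborting:
	out=$(out/minikube-linux-arm64 status -p insufficient-storage-667249 --output=json --layout=cluster || true)
	echo "$out" | jq -r '.StatusName'                               # "InsufficientStorage" in this run
	echo "$out" | jq -r '.Nodes[0].Components.kubelet.StatusName'   # "Stopped"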

                                                
                                    
TestKubernetesUpgrade (375.84s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-334175 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0830 22:23:32.359090  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-334175 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (57.413325539s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-334175
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-334175: (1.273588712s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-334175 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-334175 status --format={{.Host}}: exit status 7 (77.157427ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-334175 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0830 22:24:18.291680  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 22:25:43.073358  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-334175 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m43.772559877s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-334175 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-334175 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-334175 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (89.76868ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-334175] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17145
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-334175
	    minikube start -p kubernetes-upgrade-334175 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3341752 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-334175 --kubernetes-version=v1.28.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-334175 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-334175 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.883877903s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-334175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-334175
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-334175: (2.236637937s)
--- PASS: TestKubernetesUpgrade (375.84s)
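The upgrade flow exercised here can be replayed by hand with the same flags; a condensed sketch (profile name and binary path are whatever the local setup uses):

	minikube start -p kubernetes-upgrade-334175 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-334175
	minikube start -p kubernetes-upgrade-334175 --memory=2200 --kubernetes-version=v1.28.1 --driver=docker --container-runtime=crio
	# An in-place downgrade is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED):
	minikube start -p kubernetes-upgrade-334175 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio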

                                                
                                    
TestPause/serial/Start (89.2s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-183284 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-183284 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m29.201386502s)
--- PASS: TestPause/serial/Start (89.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-557281 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-557281 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (88.998695ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-557281] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17145
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
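The two flags are mutually exclusive, so either drop the version or clear a pinned global default, as the hint above suggests; both valid forms:

	minikube start -p NoKubernetes-557281 --no-kubernetes --driver=docker --container-runtime=crio
	minikube config unset kubernetes-version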

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (44.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-557281 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-557281 --driver=docker  --container-runtime=crio: (44.022654201s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-557281 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.51s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-557281 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-557281 --no-kubernetes --driver=docker  --container-runtime=crio: (5.769877677s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-557281 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-557281 status -o json: exit status 2 (354.414769ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-557281","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-557281
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-557281: (1.997195581s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.12s)

                                                
                                    
TestNoKubernetes/serial/Start (9.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-557281 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-557281 --no-kubernetes --driver=docker  --container-runtime=crio: (9.728239068s)
--- PASS: TestNoKubernetes/serial/Start (9.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-557281 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-557281 "sudo systemctl is-active --quiet service kubelet": exit status 1 (295.865577ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
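The numbers line up with systemd semantics: systemctl is-active --quiet exits 0 for an active unit and 3 for an inactive one, which the ssh wrapper surfaces as the status seen above. The same check outside the harness (a sketch):

	minikube ssh -p NoKubernetes-557281 "sudo systemctl is-active --quiet service kubelet" \
	  || echo "kubelet inactive (expected with --no-kubernetes)"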

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-557281
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-557281: (1.237379381s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-557281 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-557281 --driver=docker  --container-runtime=crio: (7.859994052s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.86s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-557281 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-557281 "sudo systemctl is-active --quiet service kubelet": exit status 1 (317.270568ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
TestNetworkPlugins/group/false (3.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-043874 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-043874 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (206.2045ms)

                                                
                                                
-- stdout --
	* [false-043874] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17145
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 22:17:10.584862 1097676 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:17:10.585050 1097676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:17:10.585057 1097676 out.go:309] Setting ErrFile to fd 2...
	I0830 22:17:10.585063 1097676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:17:10.585460 1097676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17145-984449/.minikube/bin
	I0830 22:17:10.585921 1097676 out.go:303] Setting JSON to false
	I0830 22:17:10.587011 1097676 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25165,"bootTime":1693408666,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0830 22:17:10.587093 1097676 start.go:138] virtualization:  
	I0830 22:17:10.591470 1097676 out.go:177] * [false-043874] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0830 22:17:10.593715 1097676 out.go:177]   - MINIKUBE_LOCATION=17145
	I0830 22:17:10.595745 1097676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:17:10.593885 1097676 notify.go:220] Checking for updates...
	I0830 22:17:10.600351 1097676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17145-984449/kubeconfig
	I0830 22:17:10.602473 1097676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17145-984449/.minikube
	I0830 22:17:10.604232 1097676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0830 22:17:10.606568 1097676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:17:10.609008 1097676 config.go:182] Loaded profile config "pause-183284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0830 22:17:10.609146 1097676 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:17:10.634287 1097676 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 22:17:10.634384 1097676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:17:10.726639 1097676 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-30 22:17:10.714231349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215113728 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:17:10.726751 1097676 docker.go:294] overlay module found
	I0830 22:17:10.729249 1097676 out.go:177] * Using the docker driver based on user configuration
	I0830 22:17:10.731132 1097676 start.go:298] selected driver: docker
	I0830 22:17:10.731145 1097676 start.go:902] validating driver "docker" against <nil>
	I0830 22:17:10.731159 1097676 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:17:10.733913 1097676 out.go:177] 
	W0830 22:17:10.736061 1097676 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0830 22:17:10.738424 1097676 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-043874 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-043874

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-043874

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-043874

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-043874

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-043874

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-043874

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-043874

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-043874

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-043874

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-043874

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-043874

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-043874" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-043874" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-043874" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-043874" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-043874" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-043874" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-043874" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-043874" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-043874" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-043874" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-043874" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 30 Aug 2023 22:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-183284
contexts:
- context:
    cluster: pause-183284
    extensions:
    - extension:
        last-update: Wed, 30 Aug 2023 22:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-183284
  name: pause-183284
current-context: pause-183284
kind: Config
preferences: {}
users:
- name: pause-183284
  user:
    client-certificate: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/pause-183284/client.crt
    client-key: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/pause-183284/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-043874

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-043874"

                                                
                                                
----------------------- debugLogs end: false-043874 [took: 3.439389458s] --------------------------------
helpers_test.go:175: Cleaning up "false-043874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-043874
--- PASS: TestNetworkPlugins/group/false (3.83s)
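The non-zero exit is by design: minikube rejects --cni=false with cri-o because that runtime requires a CNI, as the MK_USAGE message above states. Any concrete CNI passes validation; a sketch borrowing the flags from the kindnet run later in this report (profile name illustrative):

	minikube start -p false-043874 --memory=2048 --cni=kindnet --driver=docker --container-runtime=crio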

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (47.61s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-183284 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-183284 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.568569206s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (47.61s)

                                                
                                    
TestPause/serial/Pause (1.14s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-183284 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-183284 --alsologtostderr -v=5: (1.141324613s)
--- PASS: TestPause/serial/Pause (1.14s)

                                                
                                    
TestPause/serial/VerifyStatus (0.46s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-183284 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-183284 --output=json --layout=cluster: exit status 2 (459.657377ms)

                                                
                                                
-- stdout --
	{"Name":"pause-183284","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-183284","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.46s)
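A paused profile reports status code 418 and the status command itself exits 2, as captured above, so scripts can gate on the exit code without parsing the JSON (a sketch; the comment value comes from this run):

	out/minikube-linux-arm64 status -p pause-183284 --output=json --layout=cluster
	echo "status exit: $?"   # 2 while paused in this run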

                                                
                                    
TestPause/serial/Unpause (0.96s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-183284 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.96s)

                                                
                                    
TestPause/serial/PauseAgain (1.37s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-183284 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-183284 --alsologtostderr -v=5: (1.366918188s)
--- PASS: TestPause/serial/PauseAgain (1.37s)

                                                
                                    
TestPause/serial/DeletePaused (3.19s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-183284 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-183284 --alsologtostderr -v=5: (3.186966268s)
--- PASS: TestPause/serial/DeletePaused (3.19s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.49s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-183284
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-183284: exit status 1 (18.389052ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-183284: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.49s)
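docker volume inspect exiting 1 with an empty array is what confirms the cleanup here; the same probe works as a one-line assertion (a sketch):

	docker volume inspect pause-183284 >/dev/null 2>&1 || echo "volume removed"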

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.21s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.21s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-836210
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (78.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-043874 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-043874 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m18.163281347s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.16s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-043874 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-043874 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vwb77" [9b5d98e7-08a8-4bcc-81b2-86986aac6494] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0830 22:28:32.358437  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-vwb77" [9b5d98e7-08a8-4bcc-81b2-86986aac6494] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.012827405s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.54s)
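The test's own polling helper has a plain kubectl equivalent for ad-hoc checks (a sketch; kubectl wait is not what the harness calls):

	kubectl --context auto-043874 wait --for=condition=Ready pod -l app=netcat --timeout=15m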

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-043874 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-043874 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-043874 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (89.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-043874 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-043874 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m29.551732399s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.55s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (80.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-043874 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0830 22:29:18.291506  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-043874 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m20.355629315s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.36s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-p9wzs" [1fe8f6dc-637a-402c-9c86-efe620791dcd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.036919594s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)
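The same readiness wait, pointed at the calico node agent in kube-system (again a kubectl-level sketch rather than the harness code):

	kubectl --context calico-043874 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m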

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-043874 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-043874 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jhggj" [c50e36fd-8ffe-4de6-8207-a8de2d2ea3b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jhggj" [c50e36fd-8ffe-4de6-8207-a8de2d2ea3b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.012919892s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.49s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-d9hxd" [d17ec196-5b97-4c54-9716-f00479408385] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.044869427s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-043874 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-043874 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lrw8g" [72137e43-5d30-44a5-ba48-8a447b0e2cf8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0830 22:30:43.073342  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-lrw8g" [72137e43-5d30-44a5-ba48-8a447b0e2cf8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.011013876s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

TestNetworkPlugins/group/calico/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-043874 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.39s)
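The DNS probe resolves the short name kubernetes.default from inside the pod; it succeeds only if pod networking and cluster DNS are both wired up, and the short name works because the pod's resolv.conf search path expands it. The fully qualified equivalent (a sketch):

    # Same lookup without relying on the resolv.conf search path.
    kubectl --context calico-043874 exec deployment/netcat -- \
      nslookup kubernetes.default.svc.cluster.local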

TestNetworkPlugins/group/calico/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-043874 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

TestNetworkPlugins/group/calico/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-043874 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.26s)

TestNetworkPlugins/group/kindnet/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-043874 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.34s)

TestNetworkPlugins/group/kindnet/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-043874 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.27s)

TestNetworkPlugins/group/kindnet/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-043874 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.28s)
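The HairPin checks have the netcat pod dial the hostname "netcat" (presumably the Service fronting the same pod), exercising hairpin NAT: traffic that leaves a pod and is routed straight back to it through a Service VIP. The nc flags from the command above, annotated as a sketch:

    # -z    zero-I/O mode: only test that the port accepts a connection
    # -w 5  give up on the connect after 5 seconds
    # -i 5  wait 5 seconds between connection attempts
    nc -w 5 -i 5 -z netcat 8080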

TestNetworkPlugins/group/custom-flannel/Start (74.66s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-043874 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-043874 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m14.662864806s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.66s)

TestNetworkPlugins/group/enable-default-cni/Start (92.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-043874 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0830 22:32:21.341799  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-043874 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m32.316638272s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.32s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-043874 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.40s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-043874 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2sz6p" [ff505b94-dcd7-4419-9a24-d65d94c5aba9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2sz6p" [ff505b94-dcd7-4419-9a24-d65d94c5aba9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.010196983s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.40s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-043874 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-043874 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-043874 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-043874 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-043874 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6ql92" [85b52a12-f4a7-4539-8359-e122f1b34e43] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6ql92" [85b52a12-f4a7-4539-8359-e122f1b34e43] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.014052768s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.49s)

TestNetworkPlugins/group/flannel/Start (71.24s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-043874 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-043874 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m11.24362812s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.24s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-043874 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-043874 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-043874 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.29s)

TestNetworkPlugins/group/bridge/Start (91.97s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-043874 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0830 22:33:38.096394  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/auto-043874/client.crt: no such file or directory
E0830 22:33:48.336930  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/auto-043874/client.crt: no such file or directory
E0830 22:34:08.817231  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/auto-043874/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-043874 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m31.965744749s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.97s)

TestNetworkPlugins/group/flannel/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-k54v5" [2ee00b6d-f21a-49ba-9745-68b420529b51] Running
E0830 22:34:18.292240  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.0514182s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.05s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-043874 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/flannel/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-043874 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5kgp8" [163e9f8f-db1e-4ad7-a449-165aae80b41b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5kgp8" [163e9f8f-db1e-4ad7-a449-165aae80b41b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.010669559s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.38s)

TestNetworkPlugins/group/flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-043874 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-043874 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.20s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-043874 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestStartStop/group/old-k8s-version/serial/FirstStart (142.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-446594 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-446594 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m22.994177164s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (142.99s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-043874 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

TestNetworkPlugins/group/bridge/NetCatPod (13.40s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-043874 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fz5v4" [956295b0-0623-4743-8873-85970690557b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fz5v4" [956295b0-0623-4743-8873-85970690557b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.012752524s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.40s)

TestNetworkPlugins/group/bridge/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-043874 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.29s)

TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-043874 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

TestNetworkPlugins/group/bridge/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-043874 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.24s)
E0830 22:52:54.994244  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (65.51s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-933596 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0830 22:35:48.698320  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/calico-043874/client.crt: no such file or directory
E0830 22:35:57.708152  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/kindnet-043874/client.crt: no such file or directory
E0830 22:36:09.179383  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/calico-043874/client.crt: no such file or directory
E0830 22:36:11.699034  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/auto-043874/client.crt: no such file or directory
E0830 22:36:18.189192  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/kindnet-043874/client.crt: no such file or directory
E0830 22:36:50.139600  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/calico-043874/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-933596 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (1m5.509441093s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.51s)
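The no-preload group passes --preload=false, which makes minikube skip the preloaded image tarball and fetch the v1.28.1 components individually; that is the point of timing this FirstStart separately. A local sketch (profile name illustrative, flags copied from the run above):

    minikube start -p no-preload-local --memory=2200 --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.28.1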

TestStartStop/group/no-preload/serial/DeployApp (10.45s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-933596 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [07bfb9e4-517a-43cf-978f-59f975f45bf3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [07bfb9e4-517a-43cf-978f-59f975f45bf3] Running
E0830 22:36:59.150029  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/kindnet-043874/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.02767587s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-933596 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.45s)
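DeployApp ends by reading the open-file limit inside the busybox container, presumably to confirm the runtime applied the expected ulimit. The in-pod probe is just (copied from the log):

    kubectl --context no-preload-933596 exec busybox -- /bin/sh -c "ulimit -n"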

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-933596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-933596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.051043283s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-933596 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)
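This step uses minikube's addon override syntax: --images maps an addon's image name to a replacement and --registries points it at an alternate registry (fake.domain here, presumably so the test can inspect the resulting Deployment without a real image pull). Restated as a sketch, with all values copied from the run above:

    minikube addons enable metrics-server -p no-preload-933596 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain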

TestStartStop/group/no-preload/serial/Stop (12.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-933596 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-933596 --alsologtostderr -v=3: (12.123562248s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.12s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-933596 -n no-preload-933596
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-933596 -n no-preload-933596: exit status 7 (71.125073ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-933596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
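Note that the status probe above exits 7 rather than 0: minikube reports a stopped host with a non-zero exit code, and the test explicitly tolerates it ("may be ok"). A sketch of the same probe with the exit code made visible:

    # On a stopped profile this prints "Stopped" and exit=7.
    out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-933596 \
      -n no-preload-933596; echo "exit=$?"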

TestStartStop/group/no-preload/serial/SecondStart (349.62s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-933596 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-933596 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (5m49.046115744s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-933596 -n no-preload-933596
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (349.62s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-446594 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [af810b33-9729-4854-9c5c-18139f84c62f] Pending
helpers_test.go:344: "busybox" [af810b33-9729-4854-9c5c-18139f84c62f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [af810b33-9729-4854-9c5c-18139f84c62f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.05201866s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-446594 exec busybox -- /bin/sh -c "ulimit -n"
E0830 22:37:27.923602  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
E0830 22:37:27.928948  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
E0830 22:37:27.939184  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
E0830 22:37:27.960411  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
E0830 22:37:28.000649  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.63s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-446594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0830 22:37:28.081606  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
E0830 22:37:28.242033  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
E0830 22:37:28.562965  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-446594 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-446594 --alsologtostderr -v=3
E0830 22:37:29.204086  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
E0830 22:37:30.484427  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
E0830 22:37:33.044664  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
E0830 22:37:38.165177  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-446594 --alsologtostderr -v=3: (12.169025915s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-446594 -n old-k8s-version-446594
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-446594 -n old-k8s-version-446594: exit status 7 (103.278114ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-446594 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/old-k8s-version/serial/SecondStart (436.00s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-446594 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0830 22:37:48.406030  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
E0830 22:37:54.994411  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:37:54.999896  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:37:55.010114  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:37:55.030378  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:37:55.070631  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:37:55.150951  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:37:55.311679  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:37:55.632092  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:37:56.272973  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:37:57.553553  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:38:00.114243  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:38:05.235114  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:38:08.886844  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
E0830 22:38:12.059911  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/calico-043874/client.crt: no such file or directory
E0830 22:38:15.475358  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:38:21.070393  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/kindnet-043874/client.crt: no such file or directory
E0830 22:38:27.855739  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/auto-043874/client.crt: no such file or directory
E0830 22:38:32.358802  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 22:38:35.955747  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:38:49.847964  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
E0830 22:38:55.539822  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/auto-043874/client.crt: no such file or directory
E0830 22:39:14.208333  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:39:14.213619  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:39:14.223873  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:39:14.244173  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:39:14.284370  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:39:14.364720  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:39:14.525078  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:39:14.845654  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:39:15.486406  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:39:16.767458  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:39:16.916760  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:39:18.292098  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 22:39:19.328567  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:39:24.449761  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:39:34.689919  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:39:55.170588  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:40:08.271519  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:40:08.276783  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:40:08.287104  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:40:08.307480  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:40:08.347797  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:40:08.428123  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:40:08.588479  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:40:08.909328  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:40:09.550082  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:40:10.831058  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:40:11.769018  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
E0830 22:40:13.391852  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:40:18.512633  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:40:28.216869  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/calico-043874/client.crt: no such file or directory
E0830 22:40:28.752782  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:40:36.131176  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:40:37.222925  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/kindnet-043874/client.crt: no such file or directory
E0830 22:40:38.837375  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:40:43.082308  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
E0830 22:40:49.233355  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:40:55.900320  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/calico-043874/client.crt: no such file or directory
E0830 22:41:04.910914  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/kindnet-043874/client.crt: no such file or directory
E0830 22:41:30.194370  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:41:58.051391  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:42:27.923472  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
E0830 22:42:52.115259  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:42:54.994691  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:42:55.609515  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-446594 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m15.604146204s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-446594 -n old-k8s-version-446594
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (436.00s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2wd9z" [995de147-1c9d-4d38-b339-0454bad11e19] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2wd9z" [995de147-1c9d-4d38-b339-0454bad11e19] Running
E0830 22:43:15.402916  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.03297093s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.19s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2wd9z" [995de147-1c9d-4d38-b339-0454bad11e19] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012240282s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-933596 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.19s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.53s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-933596 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.53s)
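VerifyKubernetesImages shells into the node, dumps the image list as CRI JSON, and flags anything outside the expected minikube set (the two "Found non-minikube image" lines above). A sketch of inspecting the same output by hand; the jq filter is illustrative and not part of the harness:

    out/minikube-linux-arm64 ssh -p no-preload-933596 \
      "sudo crictl images -o json" | jq -r '.images[].repoTags[]'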

TestStartStop/group/no-preload/serial/Pause (4.92s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-933596 --alsologtostderr -v=1
E0830 22:43:22.678470  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-933596 --alsologtostderr -v=1: (1.222887951s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-933596 -n no-preload-933596
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-933596 -n no-preload-933596: exit status 2 (513.919028ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-933596 -n no-preload-933596
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-933596 -n no-preload-933596: exit status 2 (469.751491ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-933596 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-933596 --alsologtostderr -v=1: (1.177500182s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-933596 -n no-preload-933596
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-933596 -n no-preload-933596
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.92s)
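
Note: `minikube status` intentionally encodes component state in its exit code, so the exit-status-2 results above are expected while the cluster is paused (the harness's "may be ok"). The sequence the test drives, by hand (sketch, using this run's profile name):

	out/minikube-linux-arm64 pause -p no-preload-933596
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-933596   # "Paused", exit 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-933596     # "Stopped", exit 2
	out/minikube-linux-arm64 unpause -p no-preload-933596
	out/minikube-linux-arm64 status -p no-preload-933596                           # exit 0 once everything is Running again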

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (87.47s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-804124 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0830 22:43:32.359026  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 22:44:14.208138  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:44:18.292106  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 22:44:41.891989  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-804124 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (1m27.465904674s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-brc8d" [122ebc0f-a727-4550-a6b7-ef6ccc54481d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025644436s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.46s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-804124 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0a0af74e-980c-4947-bb2f-cbb3f0c9095c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0a0af74e-980c-4947-bb2f-cbb3f0c9095c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.035426307s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-804124 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.46s)
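
Note: DeployApp is a create/wait/exec round trip against the fresh cluster. Roughly equivalent by hand (sketch; `kubectl wait` stands in for the test's poll loop):

	kubectl --context embed-certs-804124 create -f testdata/busybox.yaml
	kubectl --context embed-certs-804124 wait pod -l integration-test=busybox \
	  --for=condition=ready --timeout=8m
	kubectl --context embed-certs-804124 exec busybox -- /bin/sh -c "ulimit -n"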

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-brc8d" [122ebc0f-a727-4550-a6b7-ef6ccc54481d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018536645s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-446594 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-804124 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-804124 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.05787289s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-804124 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)
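
Note: the metrics-server addon is enabled with its image redirected to a deliberately unreachable registry (fake.domain); the check here is only that the deployment carries the override, not that the pod actually pulls. A quicker spot check (sketch; the jsonpath target is an assumption about where the override lands):

	kubectl --context embed-certs-804124 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# should name fake.domain rather than the real metrics-server image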

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.59s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-804124 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-804124 --alsologtostderr -v=3: (12.588632487s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.59s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-446594 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-446594 --alsologtostderr -v=1
E0830 22:45:08.272032  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-446594 --alsologtostderr -v=1: (1.109232781s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-446594 -n old-k8s-version-446594
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-446594 -n old-k8s-version-446594: exit status 2 (385.226801ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-446594 -n old-k8s-version-446594
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-446594 -n old-k8s-version-446594: exit status 2 (335.675385ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-446594 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-446594 -n old-k8s-version-446594
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-446594 -n old-k8s-version-446594
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.64s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-931694 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-931694 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (55.274192705s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.27s)
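
Note: this group's distinguishing flag is --apiserver-port=8444 (the default is 8443). A quick confirmation that the cluster really serves there (sketch):

	kubectl --context default-k8s-diff-port-931694 cluster-info
	# the control plane URL should end in :8444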

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-804124 -n embed-certs-804124
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-804124 -n embed-certs-804124: exit status 7 (123.424605ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-804124 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)
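
Note: exit status 7 is how `minikube status` reports a stopped host (again "may be ok" to the harness); the test then pre-enables the dashboard addon so it comes up during SecondStart. By hand (sketch):

	out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-804124   # "Stopped", exit 7
	out/minikube-linux-arm64 addons enable dashboard -p embed-certs-804124 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4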

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (347.72s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-804124 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0830 22:45:28.217232  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/calico-043874/client.crt: no such file or directory
E0830 22:45:35.956272  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:45:37.222526  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/kindnet-043874/client.crt: no such file or directory
E0830 22:45:43.072879  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-804124 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (5m47.13015284s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-804124 -n embed-certs-804124
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (347.72s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-931694 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [190eb709-3851-4797-8de6-010a63aeb70a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [190eb709-3851-4797-8de6-010a63aeb70a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.034526057s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-931694 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.50s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-931694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-931694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.184076668s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-931694 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-931694 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-931694 --alsologtostderr -v=3: (12.086289972s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-931694 -n default-k8s-diff-port-931694
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-931694 -n default-k8s-diff-port-931694: exit status 7 (78.175814ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-931694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (351.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-931694 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0830 22:46:53.760817  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:46:53.766041  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:46:53.782718  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:46:53.803248  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:46:53.843525  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:46:53.923797  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:46:54.085748  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:46:54.406307  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:46:55.046452  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:46:56.327137  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:46:58.887565  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:47:04.008674  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:47:14.249589  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:47:18.810254  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:47:18.815583  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:47:18.825887  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:47:18.846185  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:47:18.886525  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:47:18.966906  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:47:19.127261  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:47:19.447733  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:47:20.090173  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:47:21.370988  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:47:23.931608  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:47:27.923567  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
E0830 22:47:29.052650  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:47:34.729775  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:47:39.292873  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:47:54.994468  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/enable-default-cni-043874/client.crt: no such file or directory
E0830 22:47:59.773224  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:48:15.690320  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:48:27.856231  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/auto-043874/client.crt: no such file or directory
E0830 22:48:32.359126  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/functional-540436/client.crt: no such file or directory
E0830 22:48:40.734040  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:49:01.342529  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 22:49:14.207928  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/flannel-043874/client.crt: no such file or directory
E0830 22:49:18.292391  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/ingress-addon-legacy-855931/client.crt: no such file or directory
E0830 22:49:37.611043  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:49:50.901021  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/auto-043874/client.crt: no such file or directory
E0830 22:50:02.654310  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:50:08.271858  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/bridge-043874/client.crt: no such file or directory
E0830 22:50:28.217019  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/calico-043874/client.crt: no such file or directory
E0830 22:50:37.222680  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/kindnet-043874/client.crt: no such file or directory
E0830 22:50:43.073294  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-931694 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (5m51.020855594s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-931694 -n default-k8s-diff-port-931694
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (351.60s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kx9z9" [fca5a86c-271d-4d1d-a8d2-b206181cad55] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kx9z9" [fca5a86c-271d-4d1d-a8d2-b206181cad55] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.033289992s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kx9z9" [fca5a86c-271d-4d1d-a8d2-b206181cad55] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015845374s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-804124 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-804124 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.68s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-804124 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-804124 -n embed-certs-804124
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-804124 -n embed-certs-804124: exit status 2 (363.851242ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-804124 -n embed-certs-804124
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-804124 -n embed-certs-804124: exit status 2 (376.801735ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-804124 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-804124 -n embed-certs-804124
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-804124 -n embed-certs-804124
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.68s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (45.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-688667 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0830 22:51:51.261250  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/calico-043874/client.crt: no such file or directory
E0830 22:51:53.761007  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
E0830 22:52:00.272283  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/kindnet-043874/client.crt: no such file or directory
E0830 22:52:06.116553  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/addons-934429/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-688667 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (45.058514155s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-688667 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-688667 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.340940927s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-688667 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-688667 --alsologtostderr -v=3: (1.270629138s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-688667 -n newest-cni-688667
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-688667 -n newest-cni-688667: exit status 7 (97.997546ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-688667 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (34s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-688667 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0830 22:52:18.810583  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
E0830 22:52:21.451788  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/no-preload-933596/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-688667 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (33.466040395s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-688667 -n newest-cni-688667
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.00s)
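
Note: the newest-cni group starts with --wait=apiserver,system_pods,default_sa, a comma-separated subset of the components --wait can block on, rather than --wait=true; with no CNI configured, ordinary pods cannot schedule (see the "cni mode requires additional setup" warnings in this group), so waiting on everything would never return. Minimal form of the same idea (sketch):

	out/minikube-linux-arm64 start -p newest-cni-688667 --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --wait=apiserver,system_pods,default_sa --driver=docker --container-runtime=crio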

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sl4f5" [f4ba5073-fb78-41ea-b58e-8883de4a097e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0830 22:52:27.923533  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/custom-flannel-043874/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sl4f5" [f4ba5073-fb78-41ea-b58e-8883de4a097e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.034460432s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.04s)
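
Note: UserAppExistsAfterStop checks that the dashboard pods pre-enabled before the restart came back on their own. Roughly (sketch; `kubectl wait` replaces the test's poll loop):

	kubectl --context default-k8s-diff-port-931694 -n kubernetes-dashboard wait pod \
	  -l k8s-app=kubernetes-dashboard --for=condition=ready --timeout=9m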

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sl4f5" [f4ba5073-fb78-41ea-b58e-8883de4a097e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011006659s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-931694 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-931694 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-931694 --alsologtostderr -v=1
E0830 22:52:46.495026  989825 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/old-k8s-version-446594/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-931694 --alsologtostderr -v=1: (1.138761973s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-931694 -n default-k8s-diff-port-931694
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-931694 -n default-k8s-diff-port-931694: exit status 2 (495.748118ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-931694 -n default-k8s-diff-port-931694
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-931694 -n default-k8s-diff-port-931694: exit status 2 (739.213348ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-931694 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-931694 --alsologtostderr -v=1: (1.383853114s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-931694 -n default-k8s-diff-port-931694
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-931694 -n default-k8s-diff-port-931694
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.5s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-688667 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.56s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-688667 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-688667 -n newest-cni-688667
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-688667 -n newest-cni-688667: exit status 2 (463.643617ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-688667 -n newest-cni-688667
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-688667 -n newest-cni-688667: exit status 2 (470.439934ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-688667 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-688667 -n newest-cni-688667
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-688667 -n newest-cni-688667
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.56s)

                                                
                                    

Test skip (29/304)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-195060 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-195060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-195060
--- SKIP: TestDownloadOnlyKic (0.56s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.68s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-043874 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-043874

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-043874

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-043874

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-043874

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-043874

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-043874

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-043874

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-043874

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-043874

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-043874

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: /etc/hosts:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: /etc/resolv.conf:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-043874

>>> host: crictl pods:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: crictl containers:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> k8s: describe netcat deployment:
error: context "kubenet-043874" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-043874" does not exist

>>> k8s: netcat logs:
error: context "kubenet-043874" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-043874" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-043874" does not exist

>>> k8s: coredns logs:
error: context "kubenet-043874" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-043874" does not exist

>>> k8s: api server logs:
error: context "kubenet-043874" does not exist

>>> host: /etc/cni:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: ip a s:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: ip r s:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: iptables-save:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: iptables table nat:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-043874" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-043874" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-043874" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: kubelet daemon config:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> k8s: kubelet logs:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 30 Aug 2023 22:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-183284
contexts:
- context:
    cluster: pause-183284
    extensions:
    - extension:
        last-update: Wed, 30 Aug 2023 22:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-183284
  name: pause-183284
current-context: pause-183284
kind: Config
preferences: {}
users:
- name: pause-183284
  user:
    client-certificate: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/pause-183284/client.crt
    client-key: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/pause-183284/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-043874

>>> host: docker daemon status:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: docker daemon config:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: docker system info:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: cri-docker daemon status:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: cri-docker daemon config:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: cri-dockerd version:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: containerd daemon status:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: containerd daemon config:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: containerd config dump:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: crio daemon status:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: crio daemon config:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: /etc/crio:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

>>> host: crio config:
* Profile "kubenet-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-043874"

----------------------- debugLogs end: kubenet-043874 [took: 3.504000247s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-043874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-043874
--- SKIP: TestNetworkPlugins/group/kubenet (3.68s)

TestNetworkPlugins/group/cilium (4.28s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-043874 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-043874

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-043874

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-043874

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-043874

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-043874

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-043874

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-043874

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-043874

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-043874

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-043874

>>> host: /etc/nsswitch.conf:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: /etc/hosts:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: /etc/resolv.conf:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-043874

>>> host: crictl pods:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: crictl containers:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> k8s: describe netcat deployment:
error: context "cilium-043874" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-043874" does not exist

>>> k8s: netcat logs:
error: context "cilium-043874" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-043874" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-043874" does not exist

>>> k8s: coredns logs:
error: context "cilium-043874" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-043874" does not exist

>>> k8s: api server logs:
error: context "cilium-043874" does not exist

>>> host: /etc/cni:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: ip a s:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: ip r s:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: iptables-save:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: iptables table nat:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-043874

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-043874

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-043874" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-043874" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-043874

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-043874

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-043874" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-043874" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-043874" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-043874" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-043874" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: kubelet daemon config:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> k8s: kubelet logs:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17145-984449/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 30 Aug 2023 22:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-183284
contexts:
- context:
    cluster: pause-183284
    extensions:
    - extension:
        last-update: Wed, 30 Aug 2023 22:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-183284
  name: pause-183284
current-context: pause-183284
kind: Config
preferences: {}
users:
- name: pause-183284
  user:
    client-certificate: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/pause-183284/client.crt
    client-key: /home/jenkins/minikube-integration/17145-984449/.minikube/profiles/pause-183284/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-043874

>>> host: docker daemon status:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: docker daemon config:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: docker system info:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: cri-docker daemon status:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: cri-docker daemon config:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: cri-dockerd version:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: containerd daemon status:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: containerd daemon config:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: containerd config dump:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: crio daemon status:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: crio daemon config:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: /etc/crio:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

>>> host: crio config:
* Profile "cilium-043874" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-043874"

----------------------- debugLogs end: cilium-043874 [took: 4.058620728s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-043874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-043874
--- SKIP: TestNetworkPlugins/group/cilium (4.28s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-956851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-956851
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)