Test Report: Docker_Linux_crio_arm64 17822

1b14f6e8a127ccddfb64acb15c203e20bb49b800:2023-12-19:32341

Failed tests (7/315)

Order  Failed test                                           Duration (s)
19     TestDownloadOnly/v1.29.0-rc.2/cached-images           0
35     TestAddons/parallel/Ingress                           168.26
166    TestIngressAddonLegacy/serial/ValidateIngressAddons   180.53
216    TestMultiNode/serial/PingHostFrom2Pods                4.01
238    TestRunningBinaryUpgrade                              77.19
241    TestMissingContainerUpgrade                           185.33
253    TestStoppedBinaryUpgrade/Upgrade                      99.37
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" but got error: stat /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/registry.k8s.io/kube-apiserver_v1.29.0-rc.2: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" but got error: stat /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" but got error: stat /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/registry.k8s.io/kube-scheduler_v1.29.0-rc.2: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/registry.k8s.io/kube-proxy_v1.29.0-rc.2" but got error: stat /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/registry.k8s.io/kube-proxy_v1.29.0-rc.2: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/registry.k8s.io/pause_3.9" but got error: stat /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/registry.k8s.io/pause_3.9: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/registry.k8s.io/etcd_3.5.10-0" but got error: stat /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/registry.k8s.io/etcd_3.5.10-0: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/registry.k8s.io/coredns/coredns_v1.11.1" but got error: stat /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/registry.k8s.io/coredns/coredns_v1.11.1: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/gcr.io/k8s-minikube/storage-provisioner_v5" but got error: stat /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/linux/gcr.io/k8s-minikube/storage-provisioner_v5: no such file or directory
--- FAIL: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

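Note: the check at aaa_download_only_test.go:132 appears to do little more than stat each expected image file under the profile's cache directory; every stat above returns "no such file or directory", i.e. the expected files are simply absent from the cache. A minimal sketch of that kind of existence check, assuming a stat-based helper (the helper name, the MINIKUBE_HOME-derived path, and the image list are illustrative, not the test's actual code):

	// cachedimages_sketch.go: stat each expected cached-image file and report the misses.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// checkCachedImages returns one error per expected image file that cannot be stat'ed.
	func checkCachedImages(cacheDir string, images []string) []error {
		var errs []error
		for _, img := range images {
			p := filepath.Join(cacheDir, img)
			if _, err := os.Stat(p); err != nil {
				errs = append(errs, fmt.Errorf("expected image file to exist at %q but got error: %w", p, err))
			}
		}
		return errs
	}

	func main() {
		// Assumes MINIKUBE_HOME points at the .minikube directory, as it does in this run,
		// and that cached images live under cache/images/linux/<image>_<tag>.
		cacheDir := filepath.Join(os.Getenv("MINIKUBE_HOME"), "cache", "images", "linux")
		images := []string{
			"registry.k8s.io/kube-apiserver_v1.29.0-rc.2",
			"registry.k8s.io/pause_3.9",
			"gcr.io/k8s-minikube/storage-provisioner_v5",
		}
		for _, err := range checkCachedImages(cacheDir, images) {
			fmt.Println(err)
		}
	}
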
TestAddons/parallel/Ingress (168.26s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-045387 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-045387 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-045387 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f7958cbf-1bef-4597-8b6f-f30afa3c9618] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f7958cbf-1bef-4597-8b6f-f30afa3c9618] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003512034s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-045387 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-045387 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.562123472s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-045387 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-045387 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.055184552s)

-- stdout --
	;; connection timed out; no servers could be reached
	
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-045387 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-045387 addons disable ingress-dns --alsologtostderr -v=1: (1.574642678s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-045387 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-045387 addons disable ingress --alsologtostderr -v=1: (7.805820752s)
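Note: both failures above are timeouts on network probes rather than kubectl errors: the in-node curl of the ingress exits with ssh status 28, which matches curl's exit code 28 ("operation timed out"), and the nslookup of the ingress-dns record against the node IP 192.168.49.2 also times out. A rough sketch of how the same two probes could be replayed by hand against this profile, assuming the out/minikube-linux-arm64 binary and the addons-045387 profile from the log are still available (the explicit curl timeout and the error handling are illustrative additions):

	// ingress_probes_sketch.go: replay the two probes that failed in TestAddons/parallel/Ingress.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and prints its combined output together with any error.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\nerror: %v\n%s\n", name, args, err, out)
	}

	func main() {
		// HTTP probe: curl the ingress from inside the node with the test's Host header.
		run("out/minikube-linux-arm64", "-p", "addons-045387", "ssh",
			"curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'")
		// DNS probe: resolve the ingress-dns test record against the node IP from `minikube ip`.
		run("nslookup", "hello-john.test", "192.168.49.2")
	}

If both probes still time out when replayed, that points at the ingress-nginx and ingress-dns pods not serving on the node rather than at the test harness itself.
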
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-045387
helpers_test.go:235: (dbg) docker inspect addons-045387:

-- stdout --
	[
	    {
	        "Id": "5145b26458f9873f166c1bde0af25e17d20d6ecb49a2fd72033a3fbc46f26e3e",
	        "Created": "2023-12-18T23:32:26.098585476Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 818437,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-18T23:32:26.400663728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1814c03459e4d6cfdad8e09772588ae07599742dd01f942aa2f9fe1dbb6d2813",
	        "ResolvConfPath": "/var/lib/docker/containers/5145b26458f9873f166c1bde0af25e17d20d6ecb49a2fd72033a3fbc46f26e3e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5145b26458f9873f166c1bde0af25e17d20d6ecb49a2fd72033a3fbc46f26e3e/hostname",
	        "HostsPath": "/var/lib/docker/containers/5145b26458f9873f166c1bde0af25e17d20d6ecb49a2fd72033a3fbc46f26e3e/hosts",
	        "LogPath": "/var/lib/docker/containers/5145b26458f9873f166c1bde0af25e17d20d6ecb49a2fd72033a3fbc46f26e3e/5145b26458f9873f166c1bde0af25e17d20d6ecb49a2fd72033a3fbc46f26e3e-json.log",
	        "Name": "/addons-045387",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-045387:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-045387",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8f5a13d2cbc72f8de24c37341431cf555294cc47a2b8be7aa0e98ebd2060379f-init/diff:/var/lib/docker/overlay2/db874852d391376facd52e960a3e68faa10fa2be9d9e14dbf2dda2d1f908e37e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8f5a13d2cbc72f8de24c37341431cf555294cc47a2b8be7aa0e98ebd2060379f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8f5a13d2cbc72f8de24c37341431cf555294cc47a2b8be7aa0e98ebd2060379f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8f5a13d2cbc72f8de24c37341431cf555294cc47a2b8be7aa0e98ebd2060379f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-045387",
	                "Source": "/var/lib/docker/volumes/addons-045387/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-045387",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-045387",
	                "name.minikube.sigs.k8s.io": "addons-045387",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6c46d11c8cf134ab80f890806c8719ebb3a3b304dc9f85374970c378ac8008bb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6c46d11c8cf1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-045387": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5145b26458f9",
	                        "addons-045387"
	                    ],
	                    "NetworkID": "deb97de11cbd1155ffd0354fb5de75411302221ef03e14cce30aec1e8349e8aa",
	                    "EndpointID": "6b956ce2f5d70e8f48d377eb71fc1669e7ede44e0b9ea17fa70b47524f5a0a3f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-045387 -n addons-045387
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-045387 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-045387 logs -n 25: (1.588071643s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 18 Dec 23 23:31 UTC | 18 Dec 23 23:31 UTC |
	| delete  | -p download-only-162657                                                                     | download-only-162657   | jenkins | v1.32.0 | 18 Dec 23 23:32 UTC | 18 Dec 23 23:32 UTC |
	| delete  | -p download-only-162657                                                                     | download-only-162657   | jenkins | v1.32.0 | 18 Dec 23 23:32 UTC | 18 Dec 23 23:32 UTC |
	| start   | --download-only -p                                                                          | download-docker-062537 | jenkins | v1.32.0 | 18 Dec 23 23:32 UTC |                     |
	|         | download-docker-062537                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-062537                                                                   | download-docker-062537 | jenkins | v1.32.0 | 18 Dec 23 23:32 UTC | 18 Dec 23 23:32 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-090650   | jenkins | v1.32.0 | 18 Dec 23 23:32 UTC |                     |
	|         | binary-mirror-090650                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33079                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-090650                                                                     | binary-mirror-090650   | jenkins | v1.32.0 | 18 Dec 23 23:32 UTC | 18 Dec 23 23:32 UTC |
	| addons  | disable dashboard -p                                                                        | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:32 UTC |                     |
	|         | addons-045387                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:32 UTC |                     |
	|         | addons-045387                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-045387 --wait=true                                                                | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:32 UTC | 18 Dec 23 23:34 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-045387 ip                                                                            | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:34 UTC | 18 Dec 23 23:34 UTC |
	| addons  | addons-045387 addons disable                                                                | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:34 UTC | 18 Dec 23 23:34 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-045387 addons                                                                        | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:34 UTC | 18 Dec 23 23:34 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:35 UTC | 18 Dec 23 23:35 UTC |
	|         | addons-045387                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-045387 ssh curl -s                                                                   | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:35 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-045387 addons                                                                        | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:35 UTC | 18 Dec 23 23:35 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-045387 addons                                                                        | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:35 UTC | 18 Dec 23 23:35 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-045387 ssh cat                                                                       | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:35 UTC | 18 Dec 23 23:35 UTC |
	|         | /opt/local-path-provisioner/pvc-690c8abb-703f-4bf8-a4e3-6af75a8294fd_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-045387 addons disable                                                                | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:35 UTC | 18 Dec 23 23:36 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	|         | -p addons-045387                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	|         | addons-045387                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:36 UTC | 18 Dec 23 23:36 UTC |
	|         | -p addons-045387                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-045387 ip                                                                            | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:37 UTC | 18 Dec 23 23:37 UTC |
	| addons  | addons-045387 addons disable                                                                | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:37 UTC | 18 Dec 23 23:37 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-045387 addons disable                                                                | addons-045387          | jenkins | v1.32.0 | 18 Dec 23 23:37 UTC | 18 Dec 23 23:37 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 23:32:02
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 23:32:02.066339  817975 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:32:02.066462  817975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:32:02.066476  817975 out.go:309] Setting ErrFile to fd 2...
	I1218 23:32:02.066482  817975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:32:02.066746  817975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
	I1218 23:32:02.067260  817975 out.go:303] Setting JSON to false
	I1218 23:32:02.068147  817975 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15264,"bootTime":1702927058,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 23:32:02.068225  817975 start.go:138] virtualization:  
	I1218 23:32:02.071649  817975 out.go:177] * [addons-045387] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 23:32:02.074503  817975 notify.go:220] Checking for updates...
	I1218 23:32:02.074510  817975 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 23:32:02.077440  817975 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:32:02.080377  817975 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:32:02.083025  817975 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	I1218 23:32:02.085672  817975 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 23:32:02.088355  817975 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 23:32:02.091339  817975 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 23:32:02.114987  817975 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 23:32:02.115170  817975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:32:02.200441  817975 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-18 23:32:02.190832789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:32:02.200548  817975 docker.go:295] overlay module found
	I1218 23:32:02.203422  817975 out.go:177] * Using the docker driver based on user configuration
	I1218 23:32:02.206193  817975 start.go:298] selected driver: docker
	I1218 23:32:02.206212  817975 start.go:902] validating driver "docker" against <nil>
	I1218 23:32:02.206225  817975 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 23:32:02.206900  817975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:32:02.274041  817975 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-18 23:32:02.264010529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:32:02.274209  817975 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 23:32:02.274447  817975 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 23:32:02.277046  817975 out.go:177] * Using Docker driver with root privileges
	I1218 23:32:02.279683  817975 cni.go:84] Creating CNI manager for ""
	I1218 23:32:02.279706  817975 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1218 23:32:02.279719  817975 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 23:32:02.279734  817975 start_flags.go:323] config:
	{Name:addons-045387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-045387 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:32:02.282798  817975 out.go:177] * Starting control plane node addons-045387 in cluster addons-045387
	I1218 23:32:02.285341  817975 cache.go:121] Beginning downloading kic base image for docker with crio
	I1218 23:32:02.287943  817975 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1218 23:32:02.290562  817975 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1218 23:32:02.290619  817975 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1218 23:32:02.290641  817975 cache.go:56] Caching tarball of preloaded images
	I1218 23:32:02.290659  817975 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 23:32:02.290741  817975 preload.go:174] Found /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1218 23:32:02.290752  817975 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1218 23:32:02.291097  817975 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/config.json ...
	I1218 23:32:02.291118  817975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/config.json: {Name:mk82af4a13ce111ad89169873d8672c8239e6450 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:32:02.308275  817975 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1218 23:32:02.308430  817975 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1218 23:32:02.308454  817975 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory, skipping pull
	I1218 23:32:02.308462  817975 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in cache, skipping pull
	I1218 23:32:02.308470  817975 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	I1218 23:32:02.308475  817975 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 from local cache
	I1218 23:32:18.447050  817975 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 from cached tarball
	I1218 23:32:18.447090  817975 cache.go:194] Successfully downloaded all kic artifacts
	I1218 23:32:18.447158  817975 start.go:365] acquiring machines lock for addons-045387: {Name:mk5e56e9b557dc7aa8c664f2c72c33d6afe05266 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 23:32:18.447276  817975 start.go:369] acquired machines lock for "addons-045387" in 97.164µs
	I1218 23:32:18.447313  817975 start.go:93] Provisioning new machine with config: &{Name:addons-045387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-045387 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1218 23:32:18.447433  817975 start.go:125] createHost starting for "" (driver="docker")
	I1218 23:32:18.449478  817975 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1218 23:32:18.449818  817975 start.go:159] libmachine.API.Create for "addons-045387" (driver="docker")
	I1218 23:32:18.449852  817975 client.go:168] LocalClient.Create starting
	I1218 23:32:18.449985  817975 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem
	I1218 23:32:18.817317  817975 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem
	I1218 23:32:19.725915  817975 cli_runner.go:164] Run: docker network inspect addons-045387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 23:32:19.745096  817975 cli_runner.go:211] docker network inspect addons-045387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 23:32:19.745185  817975 network_create.go:281] running [docker network inspect addons-045387] to gather additional debugging logs...
	I1218 23:32:19.745213  817975 cli_runner.go:164] Run: docker network inspect addons-045387
	W1218 23:32:19.769885  817975 cli_runner.go:211] docker network inspect addons-045387 returned with exit code 1
	I1218 23:32:19.769918  817975 network_create.go:284] error running [docker network inspect addons-045387]: docker network inspect addons-045387: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-045387 not found
	I1218 23:32:19.769947  817975 network_create.go:286] output of [docker network inspect addons-045387]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-045387 not found
	
	** /stderr **
	I1218 23:32:19.770063  817975 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 23:32:19.788542  817975 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40000f9f50}
	I1218 23:32:19.788580  817975 network_create.go:124] attempt to create docker network addons-045387 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1218 23:32:19.788642  817975 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-045387 addons-045387
	I1218 23:32:19.857589  817975 network_create.go:108] docker network addons-045387 192.168.49.0/24 created
	I1218 23:32:19.857629  817975 kic.go:121] calculated static IP "192.168.49.2" for the "addons-045387" container
	I1218 23:32:19.857722  817975 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 23:32:19.874496  817975 cli_runner.go:164] Run: docker volume create addons-045387 --label name.minikube.sigs.k8s.io=addons-045387 --label created_by.minikube.sigs.k8s.io=true
	I1218 23:32:19.893874  817975 oci.go:103] Successfully created a docker volume addons-045387
	I1218 23:32:19.893976  817975 cli_runner.go:164] Run: docker run --rm --name addons-045387-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-045387 --entrypoint /usr/bin/test -v addons-045387:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1218 23:32:21.794654  817975 cli_runner.go:217] Completed: docker run --rm --name addons-045387-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-045387 --entrypoint /usr/bin/test -v addons-045387:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib: (1.900632847s)
	I1218 23:32:21.794686  817975 oci.go:107] Successfully prepared a docker volume addons-045387
	I1218 23:32:21.794706  817975 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1218 23:32:21.794727  817975 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 23:32:21.794821  817975 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-045387:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 23:32:26.008199  817975 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-045387:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.213333556s)
	I1218 23:32:26.008238  817975 kic.go:203] duration metric: took 4.213511 seconds to extract preloaded images to volume
	W1218 23:32:26.008412  817975 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 23:32:26.008541  817975 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 23:32:26.081393  817975 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-045387 --name addons-045387 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-045387 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-045387 --network addons-045387 --ip 192.168.49.2 --volume addons-045387:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1218 23:32:26.408355  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Running}}
	I1218 23:32:26.437288  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:32:26.469514  817975 cli_runner.go:164] Run: docker exec addons-045387 stat /var/lib/dpkg/alternatives/iptables
	I1218 23:32:26.537582  817975 oci.go:144] the created container "addons-045387" has a running status.
	I1218 23:32:26.537608  817975 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa...
	I1218 23:32:27.493940  817975 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 23:32:27.520448  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:32:27.545132  817975 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 23:32:27.545158  817975 kic_runner.go:114] Args: [docker exec --privileged addons-045387 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 23:32:27.613318  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:32:27.632155  817975 machine.go:88] provisioning docker machine ...
	I1218 23:32:27.632190  817975 ubuntu.go:169] provisioning hostname "addons-045387"
	I1218 23:32:27.632261  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:32:27.658824  817975 main.go:141] libmachine: Using SSH client type: native
	I1218 23:32:27.659540  817975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1218 23:32:27.659558  817975 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-045387 && echo "addons-045387" | sudo tee /etc/hostname
	I1218 23:32:27.826620  817975 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-045387
	
	I1218 23:32:27.826709  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:32:27.845109  817975 main.go:141] libmachine: Using SSH client type: native
	I1218 23:32:27.845538  817975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1218 23:32:27.845562  817975 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-045387' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-045387/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-045387' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 23:32:27.993122  817975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
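A small, self-contained sketch of the hostname fixup the SSH command above performs: point the 127.0.1.1 entry at the machine name, appending it if no such entry exists. The hosts content below is a stand-in, not the node's real /etc/hosts:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	const name = "addons-045387"
	hosts := "127.0.0.1 localhost\n127.0.1.1 old-name\n" // stand-in for /etc/hosts
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	if re.MatchString(hosts) {
		out = re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	} else {
		out = hosts + "127.0.1.1 " + name + "\n"
	}
	fmt.Print(out)
}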
	I1218 23:32:27.993150  817975 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17822-812008/.minikube CaCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17822-812008/.minikube}
	I1218 23:32:27.993169  817975 ubuntu.go:177] setting up certificates
	I1218 23:32:27.993178  817975 provision.go:83] configureAuth start
	I1218 23:32:27.993236  817975 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-045387
	I1218 23:32:28.015617  817975 provision.go:138] copyHostCerts
	I1218 23:32:28.015714  817975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem (1078 bytes)
	I1218 23:32:28.015851  817975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem (1123 bytes)
	I1218 23:32:28.015928  817975 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem (1679 bytes)
	I1218 23:32:28.016069  817975 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem org=jenkins.addons-045387 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-045387]
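A hedged sketch (not minikube's implementation, which signs the server cert with its CA) of issuing a certificate whose DNS and IP SANs mirror the san=[...] list above; it is self-signed here only to keep the example short:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-045387"}},
		DNSNames:     []string{"localhost", "minikube", "addons-045387"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}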
	I1218 23:32:28.352071  817975 provision.go:172] copyRemoteCerts
	I1218 23:32:28.352156  817975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 23:32:28.352207  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:32:28.372617  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:32:28.478584  817975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1218 23:32:28.507450  817975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 23:32:28.535867  817975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 23:32:28.564618  817975 provision.go:86] duration metric: configureAuth took 571.426296ms
	I1218 23:32:28.564646  817975 ubuntu.go:193] setting minikube options for container-runtime
	I1218 23:32:28.564835  817975 config.go:182] Loaded profile config "addons-045387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1218 23:32:28.564947  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:32:28.582814  817975 main.go:141] libmachine: Using SSH client type: native
	I1218 23:32:28.583244  817975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1218 23:32:28.583259  817975 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1218 23:32:28.845608  817975 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1218 23:32:28.845633  817975 machine.go:91] provisioned docker machine in 1.213453861s
	I1218 23:32:28.845643  817975 client.go:171] LocalClient.Create took 10.395784065s
	I1218 23:32:28.845656  817975 start.go:167] duration metric: libmachine.API.Create for "addons-045387" took 10.395838497s
	I1218 23:32:28.845663  817975 start.go:300] post-start starting for "addons-045387" (driver="docker")
	I1218 23:32:28.845673  817975 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 23:32:28.845743  817975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 23:32:28.845799  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:32:28.864320  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:32:28.971089  817975 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 23:32:28.975346  817975 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 23:32:28.975383  817975 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1218 23:32:28.975396  817975 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1218 23:32:28.975412  817975 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1218 23:32:28.975423  817975 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/addons for local assets ...
	I1218 23:32:28.975492  817975 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/files for local assets ...
	I1218 23:32:28.975520  817975 start.go:303] post-start completed in 129.850748ms
	I1218 23:32:28.975826  817975 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-045387
	I1218 23:32:28.993706  817975 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/config.json ...
	I1218 23:32:28.993988  817975 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 23:32:28.994042  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:32:29.014989  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:32:29.113990  817975 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
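A rough equivalent of the df checks above (usage of /var) using a statfs call instead of shelling out; the percentage can differ slightly from df's own rounding:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/var", &st); err != nil {
		panic(err)
	}
	total := st.Blocks * uint64(st.Bsize)
	free := st.Bavail * uint64(st.Bsize)
	used := total - free
	fmt.Printf("/var: %.0f%% used, %dG free\n", float64(used)/float64(total)*100, free>>30)
}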
	I1218 23:32:29.119787  817975 start.go:128] duration metric: createHost completed in 10.672336938s
	I1218 23:32:29.119858  817975 start.go:83] releasing machines lock for "addons-045387", held for 10.672568247s
	I1218 23:32:29.119973  817975 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-045387
	I1218 23:32:29.137742  817975 ssh_runner.go:195] Run: cat /version.json
	I1218 23:32:29.137809  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:32:29.138064  817975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 23:32:29.138124  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:32:29.156858  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:32:29.172827  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:32:29.264409  817975 ssh_runner.go:195] Run: systemctl --version
	I1218 23:32:29.404158  817975 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1218 23:32:29.553444  817975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 23:32:29.558943  817975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 23:32:29.583680  817975 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1218 23:32:29.583823  817975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 23:32:29.630088  817975 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
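A sketch of the CNI-config disabling step above: any bridge/podman configs under /etc/cni/net.d are renamed with a .mk_disabled suffix so that only the CNI minikube installs (kindnet on this run) stays active. The glob patterns are illustrative, not the exact find expression used:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, f := range matches {
			if strings.HasSuffix(f, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(f, f+".mk_disabled"); err != nil {
				fmt.Println("skip:", err)
				continue
			}
			fmt.Println("disabled", f)
		}
	}
}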
	I1218 23:32:29.630112  817975 start.go:475] detecting cgroup driver to use...
	I1218 23:32:29.630146  817975 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1218 23:32:29.630206  817975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 23:32:29.649476  817975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 23:32:29.663014  817975 docker.go:203] disabling cri-docker service (if available) ...
	I1218 23:32:29.663084  817975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 23:32:29.678754  817975 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 23:32:29.695918  817975 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 23:32:29.800765  817975 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 23:32:29.909828  817975 docker.go:219] disabling docker service ...
	I1218 23:32:29.909918  817975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 23:32:29.930904  817975 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 23:32:29.945563  817975 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 23:32:30.090347  817975 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 23:32:30.207576  817975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 23:32:30.222429  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 23:32:30.244615  817975 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1218 23:32:30.244710  817975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1218 23:32:30.257643  817975 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1218 23:32:30.257727  817975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1218 23:32:30.270184  817975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1218 23:32:30.282735  817975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
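A sketch of the same rewrite the sed commands above apply to /etc/crio/crio.conf.d/02-crio.conf, done here on an in-memory copy with regexp (the sample content is illustrative):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
	// point CRI-O at the expected pause image and cgroup driver
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}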
	I1218 23:32:30.295330  817975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 23:32:30.307169  817975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 23:32:30.317674  817975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 23:32:30.328597  817975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 23:32:30.431407  817975 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1218 23:32:30.552402  817975 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1218 23:32:30.552515  817975 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1218 23:32:30.557575  817975 start.go:543] Will wait 60s for crictl version
	I1218 23:32:30.557641  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:32:30.562060  817975 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1218 23:32:30.604962  817975 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1218 23:32:30.605088  817975 ssh_runner.go:195] Run: crio --version
	I1218 23:32:30.651027  817975 ssh_runner.go:195] Run: crio --version
	I1218 23:32:30.696957  817975 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1218 23:32:30.698898  817975 cli_runner.go:164] Run: docker network inspect addons-045387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 23:32:30.716499  817975 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 23:32:30.721203  817975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 23:32:30.734841  817975 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1218 23:32:30.734917  817975 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 23:32:30.802645  817975 crio.go:496] all images are preloaded for cri-o runtime.
	I1218 23:32:30.802670  817975 crio.go:415] Images already preloaded, skipping extraction
	I1218 23:32:30.802724  817975 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 23:32:30.844169  817975 crio.go:496] all images are preloaded for cri-o runtime.
	I1218 23:32:30.844194  817975 cache_images.go:84] Images are preloaded, skipping loading
	I1218 23:32:30.844273  817975 ssh_runner.go:195] Run: crio config
	I1218 23:32:30.910605  817975 cni.go:84] Creating CNI manager for ""
	I1218 23:32:30.910628  817975 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1218 23:32:30.910675  817975 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 23:32:30.910699  817975 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-045387 NodeName:addons-045387 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 23:32:30.910861  817975 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-045387"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 23:32:30.910959  817975 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-045387 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-045387 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
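A quick sanity check, under the values in the config above, that the pod subnet (10.244.0.0/16) and the service CIDR (10.96.0.0/12) do not overlap:

package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two IPv4 CIDRs share any addresses.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, pods, _ := net.ParseCIDR("10.244.0.0/16")
	_, services, _ := net.ParseCIDR("10.96.0.0/12")
	fmt.Println("overlap:", overlaps(pods, services)) // false
}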
	I1218 23:32:30.911035  817975 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1218 23:32:30.921645  817975 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 23:32:30.921786  817975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 23:32:30.932235  817975 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1218 23:32:30.953818  817975 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 23:32:30.975939  817975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1218 23:32:30.997379  817975 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 23:32:31.002487  817975 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 23:32:31.017861  817975 certs.go:56] Setting up /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387 for IP: 192.168.49.2
	I1218 23:32:31.017894  817975 certs.go:190] acquiring lock for shared ca certs: {Name:mkb7306ae237ed30250289faa05e9a8d3ae56985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:32:31.018776  817975 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.key
	I1218 23:32:31.203626  817975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt ...
	I1218 23:32:31.203656  817975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt: {Name:mk9ef003889385f0e60c93c30e3568bb617e1bc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:32:31.203851  817975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-812008/.minikube/ca.key ...
	I1218 23:32:31.203864  817975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/ca.key: {Name:mka6216520afd1370d83e32722d89f9ef58b85f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:32:31.203966  817975 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.key
	I1218 23:32:31.607081  817975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.crt ...
	I1218 23:32:31.607115  817975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.crt: {Name:mk61d8cd0c65d2657f7a70b111be3bfc845303e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:32:31.607305  817975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.key ...
	I1218 23:32:31.607316  817975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.key: {Name:mke12be01dcc7a640431c25615c98ba17fe40300 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:32:31.607445  817975 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.key
	I1218 23:32:31.607464  817975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt with IP's: []
	I1218 23:32:31.854438  817975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt ...
	I1218 23:32:31.854470  817975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: {Name:mk764be888153d899bb19c67e2fa60bd8f6185e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:32:31.855336  817975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.key ...
	I1218 23:32:31.855351  817975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.key: {Name:mk83fda98aedd1c5a6a4622159cc03b7d1040f2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:32:31.855436  817975 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/apiserver.key.dd3b5fb2
	I1218 23:32:31.855456  817975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1218 23:32:32.307140  817975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/apiserver.crt.dd3b5fb2 ...
	I1218 23:32:32.307169  817975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/apiserver.crt.dd3b5fb2: {Name:mke46b28851c6a4f0cc34fce32b16cb582709811 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:32:32.307362  817975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/apiserver.key.dd3b5fb2 ...
	I1218 23:32:32.307377  817975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/apiserver.key.dd3b5fb2: {Name:mk2e31b19dd6dafe5e42508b2d2fa81464b861af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:32:32.307457  817975 certs.go:337] copying /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/apiserver.crt
	I1218 23:32:32.307529  817975 certs.go:341] copying /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/apiserver.key
	I1218 23:32:32.307580  817975 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/proxy-client.key
	I1218 23:32:32.307599  817975 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/proxy-client.crt with IP's: []
	I1218 23:32:32.535748  817975 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/proxy-client.crt ...
	I1218 23:32:32.535782  817975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/proxy-client.crt: {Name:mkd441dd23d9e96901546f2202169507d3f21487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:32:32.535984  817975 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/proxy-client.key ...
	I1218 23:32:32.535999  817975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/proxy-client.key: {Name:mkd91e0d625190d6a22158d5b7bd10d030279f16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:32:32.536893  817975 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 23:32:32.536940  817975 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem (1078 bytes)
	I1218 23:32:32.536965  817975 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem (1123 bytes)
	I1218 23:32:32.536996  817975 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem (1679 bytes)
	I1218 23:32:32.537706  817975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1218 23:32:32.569283  817975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 23:32:32.599100  817975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 23:32:32.627307  817975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1218 23:32:32.655494  817975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 23:32:32.683678  817975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 23:32:32.711889  817975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 23:32:32.740958  817975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1218 23:32:32.771022  817975 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 23:32:32.799672  817975 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 23:32:32.821244  817975 ssh_runner.go:195] Run: openssl version
	I1218 23:32:32.828659  817975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 23:32:32.840315  817975 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:32:32.844981  817975 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 23:32 /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:32:32.845045  817975 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:32:32.853860  817975 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1218 23:32:32.865286  817975 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 23:32:32.869719  817975 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 23:32:32.869767  817975 kubeadm.go:404] StartCluster: {Name:addons-045387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-045387 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:32:32.869867  817975 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1218 23:32:32.869939  817975 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 23:32:32.921230  817975 cri.go:89] found id: ""
	I1218 23:32:32.921371  817975 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 23:32:32.931871  817975 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 23:32:32.942482  817975 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1218 23:32:32.942546  817975 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 23:32:32.953125  817975 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 23:32:32.953210  817975 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 23:32:33.012477  817975 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1218 23:32:33.012783  817975 kubeadm.go:322] [preflight] Running pre-flight checks
	I1218 23:32:33.064563  817975 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1218 23:32:33.064692  817975 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1218 23:32:33.064763  817975 kubeadm.go:322] OS: Linux
	I1218 23:32:33.064857  817975 kubeadm.go:322] CGROUPS_CPU: enabled
	I1218 23:32:33.065008  817975 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1218 23:32:33.065088  817975 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1218 23:32:33.065153  817975 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1218 23:32:33.065268  817975 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1218 23:32:33.065356  817975 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1218 23:32:33.065435  817975 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1218 23:32:33.065510  817975 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1218 23:32:33.065589  817975 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1218 23:32:33.147448  817975 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 23:32:33.147625  817975 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 23:32:33.147775  817975 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1218 23:32:33.402355  817975 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 23:32:33.406229  817975 out.go:204]   - Generating certificates and keys ...
	I1218 23:32:33.406321  817975 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1218 23:32:33.406390  817975 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1218 23:32:33.861344  817975 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 23:32:34.220823  817975 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1218 23:32:34.754160  817975 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1218 23:32:35.169099  817975 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1218 23:32:35.450249  817975 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1218 23:32:35.450386  817975 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-045387 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 23:32:36.011487  817975 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1218 23:32:36.011615  817975 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-045387 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 23:32:36.637244  817975 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 23:32:37.092995  817975 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 23:32:37.273877  817975 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1218 23:32:37.274154  817975 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 23:32:37.766492  817975 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 23:32:37.964271  817975 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 23:32:38.308962  817975 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 23:32:38.791154  817975 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 23:32:38.792091  817975 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 23:32:38.796516  817975 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 23:32:38.798866  817975 out.go:204]   - Booting up control plane ...
	I1218 23:32:38.798985  817975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 23:32:38.799076  817975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 23:32:38.800139  817975 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 23:32:38.810961  817975 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 23:32:38.811995  817975 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 23:32:38.812256  817975 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1218 23:32:38.911805  817975 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1218 23:32:46.416165  817975 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504064 seconds
	I1218 23:32:46.416281  817975 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 23:32:46.438672  817975 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 23:32:46.970108  817975 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1218 23:32:46.970298  817975 kubeadm.go:322] [mark-control-plane] Marking the node addons-045387 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1218 23:32:47.480834  817975 kubeadm.go:322] [bootstrap-token] Using token: 6ibge9.2qyooowbnt3lw78h
	I1218 23:32:47.482708  817975 out.go:204]   - Configuring RBAC rules ...
	I1218 23:32:47.482833  817975 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 23:32:47.487657  817975 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 23:32:47.495423  817975 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 23:32:47.500666  817975 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 23:32:47.504502  817975 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 23:32:47.508050  817975 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 23:32:47.521471  817975 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 23:32:47.760174  817975 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1218 23:32:47.911171  817975 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1218 23:32:47.911196  817975 kubeadm.go:322] 
	I1218 23:32:47.911254  817975 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1218 23:32:47.911263  817975 kubeadm.go:322] 
	I1218 23:32:47.911336  817975 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1218 23:32:47.911344  817975 kubeadm.go:322] 
	I1218 23:32:47.911368  817975 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1218 23:32:47.911427  817975 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 23:32:47.911478  817975 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 23:32:47.911486  817975 kubeadm.go:322] 
	I1218 23:32:47.911537  817975 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1218 23:32:47.911548  817975 kubeadm.go:322] 
	I1218 23:32:47.911595  817975 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1218 23:32:47.911603  817975 kubeadm.go:322] 
	I1218 23:32:47.911653  817975 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1218 23:32:47.911728  817975 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 23:32:47.911800  817975 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 23:32:47.911808  817975 kubeadm.go:322] 
	I1218 23:32:47.911887  817975 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1218 23:32:47.911983  817975 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1218 23:32:47.911992  817975 kubeadm.go:322] 
	I1218 23:32:47.912079  817975 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6ibge9.2qyooowbnt3lw78h \
	I1218 23:32:47.912180  817975 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c459ee434ab47b92cbaa79d344c36da497111b64dc66fca5cb8785b7cbb4349c \
	I1218 23:32:47.912203  817975 kubeadm.go:322] 	--control-plane 
	I1218 23:32:47.912211  817975 kubeadm.go:322] 
	I1218 23:32:47.912291  817975 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1218 23:32:47.912299  817975 kubeadm.go:322] 
	I1218 23:32:47.912376  817975 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6ibge9.2qyooowbnt3lw78h \
	I1218 23:32:47.912483  817975 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c459ee434ab47b92cbaa79d344c36da497111b64dc66fca5cb8785b7cbb4349c 
	I1218 23:32:47.913670  817975 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1218 23:32:47.913787  817975 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
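A sketch of how a --discovery-token-ca-cert-hash like the one printed above is computed: SHA-256 over the cluster CA certificate's SubjectPublicKeyInfo. The ca.crt path below is illustrative, not taken from this run:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("ca.crt") // e.g. /var/lib/minikube/certs/ca.crt on the node
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, _ := x509.MarshalPKIXPublicKey(cert.PublicKey)
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}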
	I1218 23:32:47.913902  817975 cni.go:84] Creating CNI manager for ""
	I1218 23:32:47.913929  817975 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1218 23:32:47.915825  817975 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1218 23:32:47.917379  817975 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1218 23:32:47.938136  817975 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1218 23:32:47.938155  817975 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1218 23:32:47.990115  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1218 23:32:48.852609  817975 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 23:32:48.852689  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:48.852738  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2 minikube.k8s.io/name=addons-045387 minikube.k8s.io/updated_at=2023_12_18T23_32_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:49.016755  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:49.016818  817975 ops.go:34] apiserver oom_adj: -16
	I1218 23:32:49.517484  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:50.016917  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:50.517711  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:51.017250  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:51.517643  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:52.017178  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:52.516891  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:53.017823  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:53.517081  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:54.017129  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:54.517426  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:55.017658  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:55.517178  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:56.017109  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:56.516910  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:57.017582  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:57.517619  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:58.016975  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:58.517732  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:59.017099  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:32:59.517158  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:33:00.017578  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:33:00.517476  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:33:01.016886  817975 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:33:01.148719  817975 kubeadm.go:1088] duration metric: took 12.296092736s to wait for elevateKubeSystemPrivileges.
	I1218 23:33:01.148751  817975 kubeadm.go:406] StartCluster complete in 28.278984951s
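The repeated `kubectl get sa default` runs above are a readiness poll; a minimal sketch of that pattern follows (the two-minute deadline is an assumption for the example, not minikube's actual timeout):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// succeed as soon as the default service account exists
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}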
	I1218 23:33:01.148770  817975 settings.go:142] acquiring lock: {Name:mkb4ce0a07455c74d828d76d071a3ad023516aeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:33:01.149409  817975 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:33:01.149804  817975 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/kubeconfig: {Name:mk19de5f3e7863c913095f8f2b91ab4519f12535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:33:01.152075  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 23:33:01.152074  817975 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
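A small sketch of the toEnable pattern above: iterate a map of addon names to desired state and act only on the enabled ones (the map contents below are a subset, for illustration):

package main

import (
	"fmt"
	"sort"
)

func main() {
	toEnable := map[string]bool{
		"ingress": true, "ingress-dns": true, "metrics-server": true,
		"dashboard": false, "registry": true,
	}
	names := make([]string, 0, len(toEnable))
	for name := range toEnable {
		names = append(names, name)
	}
	sort.Strings(names) // deterministic order for logging
	for _, name := range names {
		if toEnable[name] {
			fmt.Printf("Setting addon %s=true\n", name)
		}
	}
}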
	I1218 23:33:01.152157  817975 addons.go:69] Setting volumesnapshots=true in profile "addons-045387"
	I1218 23:33:01.152172  817975 addons.go:231] Setting addon volumesnapshots=true in "addons-045387"
	I1218 23:33:01.152226  817975 addons.go:69] Setting ingress=true in profile "addons-045387"
	I1218 23:33:01.152230  817975 host.go:66] Checking if "addons-045387" exists ...
	I1218 23:33:01.152238  817975 addons.go:231] Setting addon ingress=true in "addons-045387"
	I1218 23:33:01.152279  817975 host.go:66] Checking if "addons-045387" exists ...
	I1218 23:33:01.152733  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:01.152763  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:01.153403  817975 config.go:182] Loaded profile config "addons-045387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1218 23:33:01.153440  817975 addons.go:69] Setting cloud-spanner=true in profile "addons-045387"
	I1218 23:33:01.153452  817975 addons.go:231] Setting addon cloud-spanner=true in "addons-045387"
	I1218 23:33:01.153488  817975 host.go:66] Checking if "addons-045387" exists ...
	I1218 23:33:01.153524  817975 addons.go:69] Setting ingress-dns=true in profile "addons-045387"
	I1218 23:33:01.153534  817975 addons.go:231] Setting addon ingress-dns=true in "addons-045387"
	I1218 23:33:01.153564  817975 host.go:66] Checking if "addons-045387" exists ...
	I1218 23:33:01.154016  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:01.156049  817975 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-045387"
	I1218 23:33:01.156214  817975 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-045387"
	I1218 23:33:01.156358  817975 host.go:66] Checking if "addons-045387" exists ...
	I1218 23:33:01.156522  817975 addons.go:69] Setting inspektor-gadget=true in profile "addons-045387"
	I1218 23:33:01.156548  817975 addons.go:231] Setting addon inspektor-gadget=true in "addons-045387"
	I1218 23:33:01.156580  817975 host.go:66] Checking if "addons-045387" exists ...
	I1218 23:33:01.157016  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:01.162253  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:01.163751  817975 addons.go:69] Setting metrics-server=true in profile "addons-045387"
	I1218 23:33:01.163847  817975 addons.go:231] Setting addon metrics-server=true in "addons-045387"
	I1218 23:33:01.163909  817975 host.go:66] Checking if "addons-045387" exists ...
	I1218 23:33:01.164531  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:01.167521  817975 addons.go:69] Setting default-storageclass=true in profile "addons-045387"
	I1218 23:33:01.167560  817975 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-045387"
	I1218 23:33:01.167898  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:01.175933  817975 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-045387"
	I1218 23:33:01.176039  817975 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-045387"
	I1218 23:33:01.176113  817975 host.go:66] Checking if "addons-045387" exists ...
	I1218 23:33:01.180526  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:01.183367  817975 addons.go:69] Setting gcp-auth=true in profile "addons-045387"
	I1218 23:33:01.183404  817975 mustload.go:65] Loading cluster: addons-045387
	I1218 23:33:01.183599  817975 config.go:182] Loaded profile config "addons-045387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1218 23:33:01.183850  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:01.200189  817975 addons.go:69] Setting registry=true in profile "addons-045387"
	I1218 23:33:01.200278  817975 addons.go:231] Setting addon registry=true in "addons-045387"
	I1218 23:33:01.200402  817975 host.go:66] Checking if "addons-045387" exists ...
	I1218 23:33:01.207892  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:01.212975  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:01.218822  817975 addons.go:69] Setting storage-provisioner=true in profile "addons-045387"
	I1218 23:33:01.219093  817975 addons.go:231] Setting addon storage-provisioner=true in "addons-045387"
	I1218 23:33:01.221696  817975 host.go:66] Checking if "addons-045387" exists ...
	I1218 23:33:01.218981  817975 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-045387"
	I1218 23:33:01.256940  817975 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-045387"
	I1218 23:33:01.257309  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:01.256918  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:01.320743  817975 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1218 23:33:01.328712  817975 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1218 23:33:01.328801  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1218 23:33:01.328924  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:33:01.373950  817975 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1218 23:33:01.377101  817975 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1218 23:33:01.381456  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1218 23:33:01.381559  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:33:01.391654  817975 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1218 23:33:01.396112  817975 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1218 23:33:01.396137  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1218 23:33:01.396213  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:33:01.438961  817975 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 23:33:01.440673  817975 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1218 23:33:01.442169  817975 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 23:33:01.445896  817975 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1218 23:33:01.445918  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1218 23:33:01.445987  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:33:01.470290  817975 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:33:01.471909  817975 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 23:33:01.471929  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 23:33:01.472058  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:33:01.482048  817975 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1218 23:33:01.483908  817975 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1218 23:33:01.485733  817975 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1218 23:33:01.488435  817975 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1218 23:33:01.488391  817975 addons.go:231] Setting addon default-storageclass=true in "addons-045387"
	I1218 23:33:01.509046  817975 host.go:66] Checking if "addons-045387" exists ...
	I1218 23:33:01.509549  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:01.519849  817975 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1218 23:33:01.525920  817975 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1218 23:33:01.529780  817975 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1218 23:33:01.527380  817975 out.go:177]   - Using image docker.io/registry:2.8.3
	I1218 23:33:01.533204  817975 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1218 23:33:01.535087  817975 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1218 23:33:01.537183  817975 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1218 23:33:01.537206  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1218 23:33:01.537267  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:33:01.537605  817975 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-045387"
	I1218 23:33:01.537642  817975 host.go:66] Checking if "addons-045387" exists ...
	I1218 23:33:01.538363  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:01.562135  817975 host.go:66] Checking if "addons-045387" exists ...
	I1218 23:33:01.535318  817975 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1218 23:33:01.572071  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1218 23:33:01.572147  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:33:01.535324  817975 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1218 23:33:01.575671  817975 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1218 23:33:01.575695  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1218 23:33:01.575767  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:33:01.584632  817975 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1218 23:33:01.587449  817975 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1218 23:33:01.587478  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1218 23:33:01.587547  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:33:01.609781  817975 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1218 23:33:01.611868  817975 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1218 23:33:01.611889  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1218 23:33:01.611964  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:33:01.611419  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1218 23:33:01.624129  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:33:01.662451  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:33:01.694483  817975 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 23:33:01.694505  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 23:33:01.694568  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:33:01.720081  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:33:01.732526  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:33:01.751505  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:33:01.757854  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:33:01.802759  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:33:01.806644  817975 out.go:177]   - Using image docker.io/busybox:stable
	I1218 23:33:01.810288  817975 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1218 23:33:01.814843  817975 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1218 23:33:01.814873  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1218 23:33:01.814940  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:33:01.823850  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:33:01.832116  817975 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-045387" context rescaled to 1 replicas
	I1218 23:33:01.832152  817975 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1218 23:33:01.842627  817975 out.go:177] * Verifying Kubernetes components...
	I1218 23:33:01.844343  817975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:33:01.862577  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:33:01.872327  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:33:01.887442  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:33:01.907234  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
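Each addon above follows the same pattern: the manifest is written from memory over SSH into /etc/kubernetes/addons on the node (the "scp memory -->" lines), with the SSH endpoint resolved from the Docker port mapping for 22/tcp and the machine key shown in the sshutil lines. A rough manual equivalent, assuming the docker CLI and plain scp/ssh in place of minikube's internal ssh_runner (staging through /tmp is an assumption):

    # host port mapped to the node container's SSH port (inspect template taken from the log)
    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-045387)
    # copy a manifest onto the node, then apply it with the node's own kubectl
    scp -P "$PORT" -i /home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa \
        storage-provisioner.yaml docker@127.0.0.1:/tmp/storage-provisioner.yaml
    ssh -p "$PORT" -i /home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa docker@127.0.0.1 \
        "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /tmp/storage-provisioner.yaml"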
	I1218 23:33:02.058963  817975 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1218 23:33:02.058990  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1218 23:33:02.091911  817975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1218 23:33:02.126609  817975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1218 23:33:02.177642  817975 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1218 23:33:02.177674  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1218 23:33:02.183243  817975 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1218 23:33:02.183277  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1218 23:33:02.283317  817975 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1218 23:33:02.283344  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1218 23:33:02.288712  817975 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1218 23:33:02.288741  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1218 23:33:02.295889  817975 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1218 23:33:02.295915  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1218 23:33:02.318686  817975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 23:33:02.339460  817975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 23:33:02.351788  817975 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1218 23:33:02.351814  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1218 23:33:02.395199  817975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1218 23:33:02.426415  817975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1218 23:33:02.439998  817975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1218 23:33:02.440029  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1218 23:33:02.450378  817975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1218 23:33:02.453908  817975 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1218 23:33:02.453934  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1218 23:33:02.467149  817975 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1218 23:33:02.467176  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1218 23:33:02.541258  817975 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1218 23:33:02.541284  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1218 23:33:02.548605  817975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1218 23:33:02.610328  817975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1218 23:33:02.610355  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1218 23:33:02.614901  817975 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1218 23:33:02.614946  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1218 23:33:02.662791  817975 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1218 23:33:02.662824  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1218 23:33:02.764059  817975 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1218 23:33:02.764085  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1218 23:33:02.783173  817975 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1218 23:33:02.783199  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1218 23:33:02.808981  817975 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1218 23:33:02.809006  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1218 23:33:02.864725  817975 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 23:33:02.864758  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1218 23:33:02.996760  817975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1218 23:33:03.014012  817975 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1218 23:33:03.014042  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1218 23:33:03.110739  817975 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1218 23:33:03.110772  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1218 23:33:03.114043  817975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 23:33:03.168815  817975 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1218 23:33:03.168841  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1218 23:33:03.267423  817975 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1218 23:33:03.267449  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1218 23:33:03.322977  817975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1218 23:33:03.384938  817975 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1218 23:33:03.384968  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1218 23:33:03.546476  817975 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1218 23:33:03.546554  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1218 23:33:03.631549  817975 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1218 23:33:03.631613  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1218 23:33:03.725566  817975 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1218 23:33:03.725637  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1218 23:33:03.932266  817975 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1218 23:33:03.932340  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1218 23:33:04.129225  817975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1218 23:33:04.187054  817975 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.57414218s)
	I1218 23:33:04.187078  817975 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
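The pipeline that just completed rewrites the coredns ConfigMap so that pods can resolve host.minikube.internal to the host gateway (192.168.49.1): the sed expression inserts a hosts block ahead of the forward directive and a log directive ahead of errors. A quick way to confirm the injected stanza, assuming kubectl access to the cluster:

    # show the hosts block that the pipeline added to the Corefile (sketch)
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    # expected output:
    #        hosts {
    #           192.168.49.1 host.minikube.internal
    #           fallthrough
    #        }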
	I1218 23:33:04.186999  817975 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.342626508s)
	I1218 23:33:04.187975  817975 node_ready.go:35] waiting up to 6m0s for node "addons-045387" to be "Ready" ...
	I1218 23:33:06.281525  817975 node_ready.go:58] node "addons-045387" has status "Ready":"False"
	I1218 23:33:07.456664  817975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.364708978s)
	I1218 23:33:07.456799  817975 addons.go:467] Verifying addon ingress=true in "addons-045387"
	I1218 23:33:07.456867  817975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.138155364s)
	I1218 23:33:07.456907  817975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.117423538s)
	I1218 23:33:07.459362  817975 out.go:177] * Verifying ingress addon...
	I1218 23:33:07.456737  817975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.330102079s)
	I1218 23:33:07.457301  817975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.062075958s)
	I1218 23:33:07.457332  817975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.030892184s)
	I1218 23:33:07.457362  817975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.006952918s)
	I1218 23:33:07.457402  817975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.908777512s)
	I1218 23:33:07.457453  817975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.460660559s)
	I1218 23:33:07.457526  817975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.343454711s)
	I1218 23:33:07.457571  817975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.134557856s)
	I1218 23:33:07.461842  817975 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1218 23:33:07.462116  817975 addons.go:467] Verifying addon registry=true in "addons-045387"
	I1218 23:33:07.464154  817975 out.go:177] * Verifying registry addon...
	I1218 23:33:07.462254  817975 addons.go:467] Verifying addon metrics-server=true in "addons-045387"
	W1218 23:33:07.462276  817975 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1218 23:33:07.466576  817975 retry.go:31] will retry after 338.718566ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1218 23:33:07.467351  817975 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1218 23:33:07.484421  817975 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1218 23:33:07.484454  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:07.523038  817975 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1218 23:33:07.523062  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1218 23:33:07.533276  817975 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
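The default-storageclass warning above is an optimistic-concurrency conflict: the addon tried to mark local-path as non-default while another writer was updating the same StorageClass, so the API server rejected the stale write ("the object has been modified"). The usual remedy is simply to re-apply the change against the latest object; a hedged manual equivalent using the standard default-class annotation:

    # mark "standard" as the default StorageClass and "local-path" as non-default (sketch; re-run on conflict)
    kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'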
	I1218 23:33:07.771515  817975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.642240904s)
	I1218 23:33:07.771557  817975 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-045387"
	I1218 23:33:07.773856  817975 out.go:177] * Verifying csi-hostpath-driver addon...
	I1218 23:33:07.776310  817975 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1218 23:33:07.783617  817975 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1218 23:33:07.783639  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
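The "waiting for pod ..." lines that follow are minikube's kapi poller: for each addon it lists pods matching a label selector in that addon's namespace and loops until they leave Pending, while node_ready.go separately waits up to 6m for the node itself. A roughly equivalent check with kubectl, assuming kubectl wait in place of the internal poller:

    # wait for the node and for each addon's pods to become Ready (sketch)
    kubectl wait --for=condition=Ready node/addons-045387 --timeout=6m
    kubectl -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m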
	I1218 23:33:07.805743  817975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 23:33:07.965922  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:07.972443  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:08.302856  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:08.478699  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:08.479899  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:08.720914  817975 node_ready.go:58] node "addons-045387" has status "Ready":"False"
	I1218 23:33:08.783083  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:08.966674  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:08.973057  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:09.292611  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:09.334440  817975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.528651256s)
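The earlier apply failed because csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same batch that creates the snapshot.storage.k8s.io CRDs, so the API server did not yet recognize the kind ("ensure CRDs are installed first"); the retry with --force above succeeds once the CRDs have registered. A sketch of avoiding the race by applying the CRDs first and waiting for them to be Established:

    # apply the snapshot CRDs, wait until they are served, then apply the VolumeSnapshotClass (sketch)
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
                  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
                  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml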
	I1218 23:33:09.467466  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:09.478289  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:09.783827  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:09.805433  817975 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1218 23:33:09.805522  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:33:09.836261  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:33:09.966846  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:09.971976  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:10.025744  817975 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1218 23:33:10.061713  817975 addons.go:231] Setting addon gcp-auth=true in "addons-045387"
	I1218 23:33:10.061781  817975 host.go:66] Checking if "addons-045387" exists ...
	I1218 23:33:10.062330  817975 cli_runner.go:164] Run: docker container inspect addons-045387 --format={{.State.Status}}
	I1218 23:33:10.091416  817975 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1218 23:33:10.091504  817975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-045387
	I1218 23:33:10.137786  817975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/addons-045387/id_rsa Username:docker}
	I1218 23:33:10.281471  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:10.319553  817975 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1218 23:33:10.322233  817975 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 23:33:10.324849  817975 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1218 23:33:10.324873  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1218 23:33:10.362463  817975 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1218 23:33:10.362489  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1218 23:33:10.418700  817975 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1218 23:33:10.418725  817975 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1218 23:33:10.466068  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:10.471911  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:10.489527  817975 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1218 23:33:10.781181  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:10.966906  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:10.972428  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:11.192432  817975 node_ready.go:58] node "addons-045387" has status "Ready":"False"
	I1218 23:33:11.286160  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:11.488343  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:11.488869  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:11.520137  817975 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.030572007s)
	I1218 23:33:11.523182  817975 addons.go:467] Verifying addon gcp-auth=true in "addons-045387"
	I1218 23:33:11.527764  817975 out.go:177] * Verifying gcp-auth addon...
	I1218 23:33:11.531022  817975 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1218 23:33:11.542857  817975 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1218 23:33:11.542902  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
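For gcp-auth, the test copies a credentials file to /var/lib/minikube/google_application_credentials.json and the project id to /var/lib/minikube/google_cloud_project before deploying the webhook from the gcp-auth-*.yaml manifests. Outside the test harness the addon is normally enabled from the host, roughly as below (a sketch; the credentials path is an assumption):

    # point the addon at application-default credentials, then enable it for this profile (sketch)
    export GOOGLE_APPLICATION_CREDENTIALS=$HOME/gcp-creds.json
    minikube addons enable gcp-auth -p addons-045387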
	I1218 23:33:11.781829  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:11.985196  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:11.995663  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:12.046130  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:12.282187  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:12.467376  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:12.471805  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:12.535632  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:12.781976  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:12.967695  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:12.973156  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:13.037818  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:13.192619  817975 node_ready.go:58] node "addons-045387" has status "Ready":"False"
	I1218 23:33:13.281887  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:13.471690  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:13.479381  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:13.536183  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:13.781768  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:13.967134  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:13.972522  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:14.035203  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:14.281643  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:14.465693  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:14.471754  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:14.535208  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:14.781099  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:14.967043  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:14.971971  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:15.035653  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:15.282057  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:15.466630  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:15.471011  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:15.535315  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:15.693416  817975 node_ready.go:58] node "addons-045387" has status "Ready":"False"
	I1218 23:33:15.781710  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:15.966120  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:15.971807  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:16.037259  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:16.282413  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:16.466679  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:16.471411  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:16.534613  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:16.781415  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:16.966574  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:16.971321  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:17.034929  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:17.280909  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:17.466719  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:17.472704  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:17.535061  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:17.781073  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:17.966523  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:17.971789  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:18.035189  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:18.191721  817975 node_ready.go:58] node "addons-045387" has status "Ready":"False"
	I1218 23:33:18.280872  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:18.466297  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:18.471295  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:18.534901  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:18.781187  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:18.966864  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:18.971601  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:19.034756  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:19.281673  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:19.466020  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:19.472145  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:19.535664  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:19.780585  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:19.967189  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:19.972174  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:20.035730  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:20.192660  817975 node_ready.go:58] node "addons-045387" has status "Ready":"False"
	I1218 23:33:20.280980  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:20.466833  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:20.471368  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:20.534983  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:20.781816  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:20.966349  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:20.971357  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:21.035220  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:21.280577  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:21.466922  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:21.471530  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:21.534873  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:21.781425  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:21.966983  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:21.971582  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:22.035620  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:22.281140  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:22.466743  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:22.471861  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:22.535254  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:22.691678  817975 node_ready.go:58] node "addons-045387" has status "Ready":"False"
	I1218 23:33:22.782271  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:22.966224  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:22.971366  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:23.035055  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:23.281793  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:23.466279  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:23.470968  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:23.534811  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:23.781299  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:23.966399  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:23.970990  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:24.035673  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:24.291551  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:24.467006  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:24.471730  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:24.535186  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:24.780901  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:24.966322  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:24.971123  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:25.035579  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:25.191868  817975 node_ready.go:58] node "addons-045387" has status "Ready":"False"
	I1218 23:33:25.280635  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:25.466261  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:25.471136  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:25.534970  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:25.781417  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:25.966147  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:25.972041  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:26.035390  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:26.282177  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:26.466260  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:26.471207  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:26.535385  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:26.780732  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:26.966049  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:26.971977  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:27.035573  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:27.285220  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:27.465741  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:27.471701  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:27.534773  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:27.692149  817975 node_ready.go:58] node "addons-045387" has status "Ready":"False"
	I1218 23:33:27.781705  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:27.965772  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:27.971640  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:28.035881  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:28.281359  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:28.466562  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:28.472436  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:28.535840  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:28.781326  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:28.967316  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:28.972181  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:29.035813  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:29.281050  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:29.466967  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:29.473782  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:29.535838  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:29.782256  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:29.966057  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:29.971890  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:30.036175  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:30.192595  817975 node_ready.go:58] node "addons-045387" has status "Ready":"False"
	I1218 23:33:30.281512  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:30.473715  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:30.475918  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:30.535497  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:30.781378  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:30.965836  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:30.972359  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:31.034717  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:31.281652  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:31.466357  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:31.471052  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:31.534614  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:31.781556  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:31.966615  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:31.973016  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:32.035490  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:32.291124  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:32.467140  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:32.476175  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:32.535125  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:32.691933  817975 node_ready.go:58] node "addons-045387" has status "Ready":"False"
	I1218 23:33:32.781890  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:32.966212  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:32.972664  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:33.036844  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:33.284982  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:33.466761  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:33.471835  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:33.535228  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:33.780578  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:33.966298  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:33.971298  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:34.035167  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:34.284541  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:34.467226  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:34.471938  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:34.535160  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:34.791312  817975 node_ready.go:49] node "addons-045387" has status "Ready":"True"
	I1218 23:33:34.791341  817975 node_ready.go:38] duration metric: took 30.603342122s waiting for node "addons-045387" to be "Ready" ...
	I1218 23:33:34.791352  817975 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 23:33:34.812168  817975 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1218 23:33:34.812195  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:34.823666  817975 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cjc5m" in "kube-system" namespace to be "Ready" ...
	I1218 23:33:34.968917  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:34.974290  817975 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1218 23:33:34.974318  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:35.181604  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:35.291155  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:35.511641  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:35.528448  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:35.629490  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:35.782142  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:35.967985  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:35.973384  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:36.036238  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:36.286168  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:36.466608  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:36.471895  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:36.539212  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:36.781941  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:36.830822  817975 pod_ready.go:92] pod "coredns-5dd5756b68-cjc5m" in "kube-system" namespace has status "Ready":"True"
	I1218 23:33:36.830848  817975 pod_ready.go:81] duration metric: took 2.007151982s waiting for pod "coredns-5dd5756b68-cjc5m" in "kube-system" namespace to be "Ready" ...
	I1218 23:33:36.830871  817975 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-045387" in "kube-system" namespace to be "Ready" ...
	I1218 23:33:36.837863  817975 pod_ready.go:92] pod "etcd-addons-045387" in "kube-system" namespace has status "Ready":"True"
	I1218 23:33:36.837888  817975 pod_ready.go:81] duration metric: took 7.009638ms waiting for pod "etcd-addons-045387" in "kube-system" namespace to be "Ready" ...
	I1218 23:33:36.837903  817975 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-045387" in "kube-system" namespace to be "Ready" ...
	I1218 23:33:36.844649  817975 pod_ready.go:92] pod "kube-apiserver-addons-045387" in "kube-system" namespace has status "Ready":"True"
	I1218 23:33:36.844676  817975 pod_ready.go:81] duration metric: took 6.764004ms waiting for pod "kube-apiserver-addons-045387" in "kube-system" namespace to be "Ready" ...
	I1218 23:33:36.844688  817975 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-045387" in "kube-system" namespace to be "Ready" ...
	I1218 23:33:36.852214  817975 pod_ready.go:92] pod "kube-controller-manager-addons-045387" in "kube-system" namespace has status "Ready":"True"
	I1218 23:33:36.852237  817975 pod_ready.go:81] duration metric: took 7.540694ms waiting for pod "kube-controller-manager-addons-045387" in "kube-system" namespace to be "Ready" ...
	I1218 23:33:36.852251  817975 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7ltl6" in "kube-system" namespace to be "Ready" ...
	I1218 23:33:36.858760  817975 pod_ready.go:92] pod "kube-proxy-7ltl6" in "kube-system" namespace has status "Ready":"True"
	I1218 23:33:36.858786  817975 pod_ready.go:81] duration metric: took 6.526894ms waiting for pod "kube-proxy-7ltl6" in "kube-system" namespace to be "Ready" ...
	I1218 23:33:36.858797  817975 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-045387" in "kube-system" namespace to be "Ready" ...
	I1218 23:33:36.966542  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:36.980077  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:37.051473  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:37.228106  817975 pod_ready.go:92] pod "kube-scheduler-addons-045387" in "kube-system" namespace has status "Ready":"True"
	I1218 23:33:37.228130  817975 pod_ready.go:81] duration metric: took 369.324124ms waiting for pod "kube-scheduler-addons-045387" in "kube-system" namespace to be "Ready" ...
	I1218 23:33:37.228151  817975 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-q4g7r" in "kube-system" namespace to be "Ready" ...
	I1218 23:33:37.284670  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:37.467032  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:37.473185  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:37.535484  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:37.782368  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:37.967176  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:37.973425  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:38.042537  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:38.285975  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:38.466999  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:38.472906  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:38.537053  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:38.784246  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:38.967746  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:38.972632  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:39.036189  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:39.239134  817975 pod_ready.go:102] pod "metrics-server-7c66d45ddc-q4g7r" in "kube-system" namespace has status "Ready":"False"
	I1218 23:33:39.283655  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:39.467049  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:39.472954  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:39.560422  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:39.786243  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:39.967198  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:39.980573  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:40.037338  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:40.283773  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:40.466660  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:40.474941  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:40.559648  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:40.739264  817975 pod_ready.go:92] pod "metrics-server-7c66d45ddc-q4g7r" in "kube-system" namespace has status "Ready":"True"
	I1218 23:33:40.739288  817975 pod_ready.go:81] duration metric: took 3.511129032s waiting for pod "metrics-server-7c66d45ddc-q4g7r" in "kube-system" namespace to be "Ready" ...
	I1218 23:33:40.739299  817975 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-8964k" in "kube-system" namespace to be "Ready" ...
	I1218 23:33:40.785132  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:40.968554  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:40.974502  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:41.037100  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:41.284446  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:41.481939  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:41.483032  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:41.538734  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:41.788784  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:41.974376  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:41.980737  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:42.035721  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:42.285468  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:42.466642  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:42.473346  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:42.535350  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:42.746277  817975 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-8964k" in "kube-system" namespace has status "Ready":"False"
	I1218 23:33:42.782357  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:42.967161  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:42.972787  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:43.035529  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:43.286574  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:43.468908  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:43.473746  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:43.537699  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:43.782752  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:43.966522  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:43.972613  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:44.035515  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:44.283758  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:44.467078  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:44.474317  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:44.535276  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:44.746478  817975 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-8964k" in "kube-system" namespace has status "Ready":"False"
	I1218 23:33:44.790891  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:44.966794  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:44.973091  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:45.044584  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:45.307437  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:45.468625  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:45.475576  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:45.535708  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:45.784346  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:45.967349  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:45.972450  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:46.038841  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:46.301861  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:46.469188  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:46.474352  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:46.535753  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:46.748058  817975 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-8964k" in "kube-system" namespace has status "Ready":"False"
	I1218 23:33:46.783445  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:46.971368  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:46.978169  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:47.040554  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:47.282474  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:47.468200  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:47.476893  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:47.537064  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:47.786596  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:47.969491  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:47.977190  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:48.049288  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:48.285751  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:48.466799  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:48.472911  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:48.535900  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:48.782717  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:48.966193  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:48.975211  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:49.040536  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:49.246642  817975 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-8964k" in "kube-system" namespace has status "Ready":"False"
	I1218 23:33:49.301066  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:49.467208  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:49.472858  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:49.535863  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:49.789387  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:49.968278  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:49.973361  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:50.035750  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:50.282987  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:50.473750  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:50.474298  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:50.535371  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:50.783672  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:50.969152  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:50.975274  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:51.042293  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:51.248606  817975 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-8964k" in "kube-system" namespace has status "Ready":"False"
	I1218 23:33:51.307468  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:51.490076  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:51.491269  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:51.535408  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:51.788033  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:51.966601  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:51.974931  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:52.035266  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:52.282827  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:52.470266  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:52.479457  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:52.535243  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:52.782636  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:52.967207  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:52.972825  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:53.036238  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:53.283402  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:53.473517  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:53.491409  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:53.535091  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:53.748821  817975 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-8964k" in "kube-system" namespace has status "Ready":"False"
	I1218 23:33:53.784030  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:53.967285  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:53.973944  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:54.036217  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:54.282693  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:54.470275  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:54.477192  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:54.548014  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:54.782053  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:54.966742  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:54.972750  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:55.034993  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:55.282612  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:55.470923  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:55.477605  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:55.571464  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:55.782762  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:55.986587  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:55.989544  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:56.035893  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:56.246253  817975 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-8964k" in "kube-system" namespace has status "Ready":"False"
	I1218 23:33:56.282787  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:56.467408  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:56.472217  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:56.537576  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:56.782519  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:56.966990  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:56.972316  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:57.035413  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:57.282346  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:57.466655  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:57.472050  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:57.535142  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:57.782933  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:57.970578  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:57.979742  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:58.035378  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:58.248348  817975 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-8964k" in "kube-system" namespace has status "Ready":"False"
	I1218 23:33:58.283124  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:58.466208  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:58.474212  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:58.536247  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:58.782682  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:58.967094  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:58.972640  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:59.035919  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:59.283096  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:59.466179  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:59.472699  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:33:59.535492  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:33:59.782504  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:33:59.969000  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:33:59.977962  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:00.069403  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:00.276565  817975 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-8964k" in "kube-system" namespace has status "Ready":"False"
	I1218 23:34:00.284730  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:00.470327  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:00.474172  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:00.537499  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:00.784110  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:00.967076  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:00.973678  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:01.034909  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:01.282476  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:01.467219  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:01.472804  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:01.552669  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:01.782488  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:01.966970  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:01.980247  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:02.036252  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:02.283342  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:02.466851  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:02.472353  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:02.544146  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:02.745946  817975 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-8964k" in "kube-system" namespace has status "Ready":"False"
	I1218 23:34:02.781926  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:02.966375  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:02.971678  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:03.035437  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:03.282050  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:03.466266  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:03.473103  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:03.540234  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:03.782433  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:03.966511  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:03.971841  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:04.034990  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:04.283095  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:04.467915  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:04.472709  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:04.535107  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:04.747138  817975 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-8964k" in "kube-system" namespace has status "Ready":"False"
	I1218 23:34:04.783133  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:04.976704  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:04.992507  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:05.036559  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:05.283258  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:05.467195  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:05.479991  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:05.536231  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:05.791766  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:05.968983  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:05.982527  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:06.037661  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:06.283821  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:06.467550  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:06.473129  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:06.535970  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:06.782759  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:06.966867  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:06.973691  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:07.035658  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:07.246179  817975 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-8964k" in "kube-system" namespace has status "Ready":"False"
	I1218 23:34:07.286577  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:07.468912  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:07.473707  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:07.543030  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:07.784841  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:07.967816  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:07.972407  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:08.036235  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:08.246209  817975 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-8964k" in "kube-system" namespace has status "Ready":"True"
	I1218 23:34:08.246274  817975 pod_ready.go:81] duration metric: took 27.506964727s waiting for pod "nvidia-device-plugin-daemonset-8964k" in "kube-system" namespace to be "Ready" ...
	I1218 23:34:08.246302  817975 pod_ready.go:38] duration metric: took 33.454937948s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 23:34:08.246320  817975 api_server.go:52] waiting for apiserver process to appear ...
	I1218 23:34:08.246351  817975 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1218 23:34:08.246412  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 23:34:08.287314  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:08.301522  817975 cri.go:89] found id: "ec5fa44c605d457f214f9f0305dbdbfe9de864f1a966cce47a57bbedcbf54d54"
	I1218 23:34:08.301544  817975 cri.go:89] found id: ""
	I1218 23:34:08.301551  817975 logs.go:284] 1 containers: [ec5fa44c605d457f214f9f0305dbdbfe9de864f1a966cce47a57bbedcbf54d54]
	I1218 23:34:08.301607  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:08.306128  817975 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1218 23:34:08.306211  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 23:34:08.353682  817975 cri.go:89] found id: "77a8bf0f58d6a60e1bafbd82f4da49b3eafeee9f63cbc493706f960ffa57f319"
	I1218 23:34:08.353706  817975 cri.go:89] found id: ""
	I1218 23:34:08.353714  817975 logs.go:284] 1 containers: [77a8bf0f58d6a60e1bafbd82f4da49b3eafeee9f63cbc493706f960ffa57f319]
	I1218 23:34:08.353767  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:08.359098  817975 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1218 23:34:08.359165  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 23:34:08.413792  817975 cri.go:89] found id: "c4af11fdd2bc22d277b564886099bc81a955943fba6ac482af773169d102e974"
	I1218 23:34:08.413815  817975 cri.go:89] found id: ""
	I1218 23:34:08.413822  817975 logs.go:284] 1 containers: [c4af11fdd2bc22d277b564886099bc81a955943fba6ac482af773169d102e974]
	I1218 23:34:08.413899  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:08.418505  817975 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1218 23:34:08.418591  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 23:34:08.465432  817975 cri.go:89] found id: "ca5bf915cc2e4cf957bcfb0e0436aa3bc88ddf0c96d21ea58897b89843bf62ab"
	I1218 23:34:08.465457  817975 cri.go:89] found id: ""
	I1218 23:34:08.465465  817975 logs.go:284] 1 containers: [ca5bf915cc2e4cf957bcfb0e0436aa3bc88ddf0c96d21ea58897b89843bf62ab]
	I1218 23:34:08.465533  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:08.470071  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:08.471282  817975 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1218 23:34:08.471366  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 23:34:08.477709  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:08.523078  817975 cri.go:89] found id: "7812921bc48533678ce40bd0da97678f42c0257205cdf0504e35df13fa84d7cd"
	I1218 23:34:08.523103  817975 cri.go:89] found id: ""
	I1218 23:34:08.523111  817975 logs.go:284] 1 containers: [7812921bc48533678ce40bd0da97678f42c0257205cdf0504e35df13fa84d7cd]
	I1218 23:34:08.523170  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:08.527856  817975 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 23:34:08.527930  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 23:34:08.535939  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:08.574245  817975 cri.go:89] found id: "5043bc42c4fa828f18d8c27e0aa541e2bd605ecbb52c086d8e2475609ec5594f"
	I1218 23:34:08.574271  817975 cri.go:89] found id: ""
	I1218 23:34:08.574279  817975 logs.go:284] 1 containers: [5043bc42c4fa828f18d8c27e0aa541e2bd605ecbb52c086d8e2475609ec5594f]
	I1218 23:34:08.574336  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:08.579601  817975 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1218 23:34:08.579676  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 23:34:08.623841  817975 cri.go:89] found id: "2ca2de4069275903bde893607639ec8c8402ce5cbcb279e853691abc7a305af5"
	I1218 23:34:08.623867  817975 cri.go:89] found id: ""
	I1218 23:34:08.623875  817975 logs.go:284] 1 containers: [2ca2de4069275903bde893607639ec8c8402ce5cbcb279e853691abc7a305af5]
	I1218 23:34:08.623930  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:08.628896  817975 logs.go:123] Gathering logs for kubelet ...
	I1218 23:34:08.628921  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1218 23:34:08.657565  817975 logs.go:138] Found kubelet problem: Dec 18 23:33:01 addons-045387 kubelet[1354]: W1218 23:33:01.282209    1354 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-045387" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-045387' and this object
	W1218 23:34:08.657826  817975 logs.go:138] Found kubelet problem: Dec 18 23:33:01 addons-045387 kubelet[1354]: E1218 23:33:01.282250    1354 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-045387" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-045387' and this object
	I1218 23:34:08.720246  817975 logs.go:123] Gathering logs for dmesg ...
	I1218 23:34:08.720284  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 23:34:08.745316  817975 logs.go:123] Gathering logs for kube-apiserver [ec5fa44c605d457f214f9f0305dbdbfe9de864f1a966cce47a57bbedcbf54d54] ...
	I1218 23:34:08.745346  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5fa44c605d457f214f9f0305dbdbfe9de864f1a966cce47a57bbedcbf54d54"
	I1218 23:34:08.782408  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:08.870299  817975 logs.go:123] Gathering logs for coredns [c4af11fdd2bc22d277b564886099bc81a955943fba6ac482af773169d102e974] ...
	I1218 23:34:08.870344  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4af11fdd2bc22d277b564886099bc81a955943fba6ac482af773169d102e974"
	I1218 23:34:08.960619  817975 logs.go:123] Gathering logs for kube-scheduler [ca5bf915cc2e4cf957bcfb0e0436aa3bc88ddf0c96d21ea58897b89843bf62ab] ...
	I1218 23:34:08.960656  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca5bf915cc2e4cf957bcfb0e0436aa3bc88ddf0c96d21ea58897b89843bf62ab"
	I1218 23:34:08.967426  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:08.972679  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:09.026710  817975 logs.go:123] Gathering logs for kube-controller-manager [5043bc42c4fa828f18d8c27e0aa541e2bd605ecbb52c086d8e2475609ec5594f] ...
	I1218 23:34:09.026742  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5043bc42c4fa828f18d8c27e0aa541e2bd605ecbb52c086d8e2475609ec5594f"
	I1218 23:34:09.037382  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:09.131455  817975 logs.go:123] Gathering logs for kindnet [2ca2de4069275903bde893607639ec8c8402ce5cbcb279e853691abc7a305af5] ...
	I1218 23:34:09.131508  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ca2de4069275903bde893607639ec8c8402ce5cbcb279e853691abc7a305af5"
	I1218 23:34:09.187834  817975 logs.go:123] Gathering logs for describe nodes ...
	I1218 23:34:09.187865  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1218 23:34:09.283667  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:09.404423  817975 logs.go:123] Gathering logs for etcd [77a8bf0f58d6a60e1bafbd82f4da49b3eafeee9f63cbc493706f960ffa57f319] ...
	I1218 23:34:09.404456  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77a8bf0f58d6a60e1bafbd82f4da49b3eafeee9f63cbc493706f960ffa57f319"
	I1218 23:34:09.477775  817975 logs.go:123] Gathering logs for kube-proxy [7812921bc48533678ce40bd0da97678f42c0257205cdf0504e35df13fa84d7cd] ...
	I1218 23:34:09.477856  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7812921bc48533678ce40bd0da97678f42c0257205cdf0504e35df13fa84d7cd"
	I1218 23:34:09.482161  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:09.483366  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:09.535716  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:09.540181  817975 logs.go:123] Gathering logs for CRI-O ...
	I1218 23:34:09.540211  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1218 23:34:09.662036  817975 logs.go:123] Gathering logs for container status ...
	I1218 23:34:09.662075  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 23:34:09.762854  817975 out.go:309] Setting ErrFile to fd 2...
	I1218 23:34:09.762884  817975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1218 23:34:09.762936  817975 out.go:239] X Problems detected in kubelet:
	W1218 23:34:09.762951  817975 out.go:239]   Dec 18 23:33:01 addons-045387 kubelet[1354]: W1218 23:33:01.282209    1354 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-045387" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-045387' and this object
	W1218 23:34:09.762969  817975 out.go:239]   Dec 18 23:33:01 addons-045387 kubelet[1354]: E1218 23:33:01.282250    1354 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-045387" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-045387' and this object
	I1218 23:34:09.762983  817975 out.go:309] Setting ErrFile to fd 2...
	I1218 23:34:09.762989  817975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:34:09.783282  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:09.966696  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:09.973537  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:10.036363  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:10.282853  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:10.466478  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:10.471984  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:10.534693  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:10.782005  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:10.967532  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:10.976231  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:11.035117  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:11.283323  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:11.466942  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:11.473335  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:11.537859  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:11.784666  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:11.967172  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:11.973683  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:12.036098  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:12.283179  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:12.467104  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:12.473375  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:12.535461  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:12.788114  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:12.967027  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:12.973700  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:13.037466  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:13.283055  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:13.467464  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:13.473584  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:13.535864  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:13.783763  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:13.967054  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:14.009718  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:14.050448  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:14.282619  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:14.467241  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:14.473765  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 23:34:14.538236  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:14.783317  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:14.986634  817975 kapi.go:107] duration metric: took 1m7.519279329s to wait for kubernetes.io/minikube-addons=registry ...
	I1218 23:34:14.986727  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:15.040528  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:15.284854  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:15.467150  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:15.534680  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:15.782074  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:15.971284  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:16.035167  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:16.283445  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:16.467597  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:16.536180  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:16.782528  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:16.973881  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:17.035685  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:17.287930  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:17.466532  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:17.535056  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 23:34:17.782745  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:17.969264  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:18.038211  817975 kapi.go:107] duration metric: took 1m6.507173064s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1218 23:34:18.040087  817975 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-045387 cluster.
	I1218 23:34:18.042207  817975 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1218 23:34:18.043809  817975 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1218 23:34:18.282222  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:18.466458  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:18.782689  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:18.966448  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:19.284683  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:19.466278  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:19.765048  817975 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 23:34:19.783242  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:19.802687  817975 api_server.go:72] duration metric: took 1m17.970495019s to wait for apiserver process to appear ...
	I1218 23:34:19.802713  817975 api_server.go:88] waiting for apiserver healthz status ...
	I1218 23:34:19.802753  817975 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1218 23:34:19.802813  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 23:34:19.968167  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:20.070435  817975 cri.go:89] found id: "ec5fa44c605d457f214f9f0305dbdbfe9de864f1a966cce47a57bbedcbf54d54"
	I1218 23:34:20.070461  817975 cri.go:89] found id: ""
	I1218 23:34:20.070469  817975 logs.go:284] 1 containers: [ec5fa44c605d457f214f9f0305dbdbfe9de864f1a966cce47a57bbedcbf54d54]
	I1218 23:34:20.070537  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:20.085962  817975 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1218 23:34:20.086048  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 23:34:20.290813  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:20.302526  817975 cri.go:89] found id: "77a8bf0f58d6a60e1bafbd82f4da49b3eafeee9f63cbc493706f960ffa57f319"
	I1218 23:34:20.302592  817975 cri.go:89] found id: ""
	I1218 23:34:20.302612  817975 logs.go:284] 1 containers: [77a8bf0f58d6a60e1bafbd82f4da49b3eafeee9f63cbc493706f960ffa57f319]
	I1218 23:34:20.302702  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:20.325699  817975 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1218 23:34:20.325813  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 23:34:20.467442  817975 cri.go:89] found id: "c4af11fdd2bc22d277b564886099bc81a955943fba6ac482af773169d102e974"
	I1218 23:34:20.467513  817975 cri.go:89] found id: ""
	I1218 23:34:20.467534  817975 logs.go:284] 1 containers: [c4af11fdd2bc22d277b564886099bc81a955943fba6ac482af773169d102e974]
	I1218 23:34:20.467616  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:20.470309  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:20.482174  817975 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1218 23:34:20.482283  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 23:34:20.555999  817975 cri.go:89] found id: "ca5bf915cc2e4cf957bcfb0e0436aa3bc88ddf0c96d21ea58897b89843bf62ab"
	I1218 23:34:20.556063  817975 cri.go:89] found id: ""
	I1218 23:34:20.556077  817975 logs.go:284] 1 containers: [ca5bf915cc2e4cf957bcfb0e0436aa3bc88ddf0c96d21ea58897b89843bf62ab]
	I1218 23:34:20.556143  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:20.562182  817975 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1218 23:34:20.562257  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 23:34:20.628600  817975 cri.go:89] found id: "7812921bc48533678ce40bd0da97678f42c0257205cdf0504e35df13fa84d7cd"
	I1218 23:34:20.628624  817975 cri.go:89] found id: ""
	I1218 23:34:20.628632  817975 logs.go:284] 1 containers: [7812921bc48533678ce40bd0da97678f42c0257205cdf0504e35df13fa84d7cd]
	I1218 23:34:20.628688  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:20.633531  817975 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 23:34:20.633606  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 23:34:20.702671  817975 cri.go:89] found id: "5043bc42c4fa828f18d8c27e0aa541e2bd605ecbb52c086d8e2475609ec5594f"
	I1218 23:34:20.702693  817975 cri.go:89] found id: ""
	I1218 23:34:20.702701  817975 logs.go:284] 1 containers: [5043bc42c4fa828f18d8c27e0aa541e2bd605ecbb52c086d8e2475609ec5594f]
	I1218 23:34:20.702756  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:20.707826  817975 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1218 23:34:20.707894  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 23:34:20.775107  817975 cri.go:89] found id: "2ca2de4069275903bde893607639ec8c8402ce5cbcb279e853691abc7a305af5"
	I1218 23:34:20.775132  817975 cri.go:89] found id: ""
	I1218 23:34:20.775142  817975 logs.go:284] 1 containers: [2ca2de4069275903bde893607639ec8c8402ce5cbcb279e853691abc7a305af5]
	I1218 23:34:20.775198  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:20.783643  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:20.788776  817975 logs.go:123] Gathering logs for container status ...
	I1218 23:34:20.788837  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 23:34:20.874051  817975 logs.go:123] Gathering logs for kubelet ...
	I1218 23:34:20.874118  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 23:34:20.967112  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:20.975395  817975 logs.go:123] Gathering logs for describe nodes ...
	I1218 23:34:20.975464  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1218 23:34:21.215297  817975 logs.go:123] Gathering logs for etcd [77a8bf0f58d6a60e1bafbd82f4da49b3eafeee9f63cbc493706f960ffa57f319] ...
	I1218 23:34:21.215377  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77a8bf0f58d6a60e1bafbd82f4da49b3eafeee9f63cbc493706f960ffa57f319"
	I1218 23:34:21.283395  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:21.303748  817975 logs.go:123] Gathering logs for coredns [c4af11fdd2bc22d277b564886099bc81a955943fba6ac482af773169d102e974] ...
	I1218 23:34:21.303820  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4af11fdd2bc22d277b564886099bc81a955943fba6ac482af773169d102e974"
	I1218 23:34:21.369846  817975 logs.go:123] Gathering logs for kube-controller-manager [5043bc42c4fa828f18d8c27e0aa541e2bd605ecbb52c086d8e2475609ec5594f] ...
	I1218 23:34:21.369925  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5043bc42c4fa828f18d8c27e0aa541e2bd605ecbb52c086d8e2475609ec5594f"
	I1218 23:34:21.467474  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:21.512856  817975 logs.go:123] Gathering logs for kindnet [2ca2de4069275903bde893607639ec8c8402ce5cbcb279e853691abc7a305af5] ...
	I1218 23:34:21.512933  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ca2de4069275903bde893607639ec8c8402ce5cbcb279e853691abc7a305af5"
	I1218 23:34:21.573505  817975 logs.go:123] Gathering logs for CRI-O ...
	I1218 23:34:21.573534  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1218 23:34:21.677794  817975 logs.go:123] Gathering logs for dmesg ...
	I1218 23:34:21.678065  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 23:34:21.710235  817975 logs.go:123] Gathering logs for kube-apiserver [ec5fa44c605d457f214f9f0305dbdbfe9de864f1a966cce47a57bbedcbf54d54] ...
	I1218 23:34:21.710264  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5fa44c605d457f214f9f0305dbdbfe9de864f1a966cce47a57bbedcbf54d54"
	I1218 23:34:21.783109  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:21.830350  817975 logs.go:123] Gathering logs for kube-scheduler [ca5bf915cc2e4cf957bcfb0e0436aa3bc88ddf0c96d21ea58897b89843bf62ab] ...
	I1218 23:34:21.830426  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca5bf915cc2e4cf957bcfb0e0436aa3bc88ddf0c96d21ea58897b89843bf62ab"
	I1218 23:34:21.904650  817975 logs.go:123] Gathering logs for kube-proxy [7812921bc48533678ce40bd0da97678f42c0257205cdf0504e35df13fa84d7cd] ...
	I1218 23:34:21.904677  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7812921bc48533678ce40bd0da97678f42c0257205cdf0504e35df13fa84d7cd"
	I1218 23:34:21.969472  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:22.286204  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:22.471303  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:22.782958  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:22.966827  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:23.286243  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:23.466873  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:23.782868  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:23.966404  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:24.282925  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:24.457370  817975 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1218 23:34:24.466430  817975 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1218 23:34:24.466973  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:24.468239  817975 api_server.go:141] control plane version: v1.28.4
	I1218 23:34:24.468289  817975 api_server.go:131] duration metric: took 4.665567604s to wait for apiserver health ...
	I1218 23:34:24.468326  817975 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 23:34:24.468366  817975 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1218 23:34:24.468456  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1218 23:34:24.537433  817975 cri.go:89] found id: "ec5fa44c605d457f214f9f0305dbdbfe9de864f1a966cce47a57bbedcbf54d54"
	I1218 23:34:24.537495  817975 cri.go:89] found id: ""
	I1218 23:34:24.537515  817975 logs.go:284] 1 containers: [ec5fa44c605d457f214f9f0305dbdbfe9de864f1a966cce47a57bbedcbf54d54]
	I1218 23:34:24.537607  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:24.542824  817975 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1218 23:34:24.542935  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1218 23:34:24.623688  817975 cri.go:89] found id: "77a8bf0f58d6a60e1bafbd82f4da49b3eafeee9f63cbc493706f960ffa57f319"
	I1218 23:34:24.623749  817975 cri.go:89] found id: ""
	I1218 23:34:24.623783  817975 logs.go:284] 1 containers: [77a8bf0f58d6a60e1bafbd82f4da49b3eafeee9f63cbc493706f960ffa57f319]
	I1218 23:34:24.623863  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:24.630336  817975 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1218 23:34:24.630442  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1218 23:34:24.714738  817975 cri.go:89] found id: "c4af11fdd2bc22d277b564886099bc81a955943fba6ac482af773169d102e974"
	I1218 23:34:24.714801  817975 cri.go:89] found id: ""
	I1218 23:34:24.714821  817975 logs.go:284] 1 containers: [c4af11fdd2bc22d277b564886099bc81a955943fba6ac482af773169d102e974]
	I1218 23:34:24.714905  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:24.719519  817975 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1218 23:34:24.719625  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1218 23:34:24.785935  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:24.786143  817975 cri.go:89] found id: "ca5bf915cc2e4cf957bcfb0e0436aa3bc88ddf0c96d21ea58897b89843bf62ab"
	I1218 23:34:24.786193  817975 cri.go:89] found id: ""
	I1218 23:34:24.786214  817975 logs.go:284] 1 containers: [ca5bf915cc2e4cf957bcfb0e0436aa3bc88ddf0c96d21ea58897b89843bf62ab]
	I1218 23:34:24.786292  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:24.790963  817975 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1218 23:34:24.791031  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1218 23:34:24.856571  817975 cri.go:89] found id: "7812921bc48533678ce40bd0da97678f42c0257205cdf0504e35df13fa84d7cd"
	I1218 23:34:24.856633  817975 cri.go:89] found id: ""
	I1218 23:34:24.856652  817975 logs.go:284] 1 containers: [7812921bc48533678ce40bd0da97678f42c0257205cdf0504e35df13fa84d7cd]
	I1218 23:34:24.856715  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:24.861653  817975 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1218 23:34:24.861775  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1218 23:34:24.925752  817975 cri.go:89] found id: "5043bc42c4fa828f18d8c27e0aa541e2bd605ecbb52c086d8e2475609ec5594f"
	I1218 23:34:24.925821  817975 cri.go:89] found id: ""
	I1218 23:34:24.925842  817975 logs.go:284] 1 containers: [5043bc42c4fa828f18d8c27e0aa541e2bd605ecbb52c086d8e2475609ec5594f]
	I1218 23:34:24.925934  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:24.930783  817975 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1218 23:34:24.930893  817975 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1218 23:34:24.968615  817975 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 23:34:24.990589  817975 cri.go:89] found id: "2ca2de4069275903bde893607639ec8c8402ce5cbcb279e853691abc7a305af5"
	I1218 23:34:24.990660  817975 cri.go:89] found id: ""
	I1218 23:34:24.990682  817975 logs.go:284] 1 containers: [2ca2de4069275903bde893607639ec8c8402ce5cbcb279e853691abc7a305af5]
	I1218 23:34:24.990769  817975 ssh_runner.go:195] Run: which crictl
	I1218 23:34:24.996762  817975 logs.go:123] Gathering logs for kubelet ...
	I1218 23:34:24.996823  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1218 23:34:25.124708  817975 logs.go:123] Gathering logs for kube-scheduler [ca5bf915cc2e4cf957bcfb0e0436aa3bc88ddf0c96d21ea58897b89843bf62ab] ...
	I1218 23:34:25.124798  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca5bf915cc2e4cf957bcfb0e0436aa3bc88ddf0c96d21ea58897b89843bf62ab"
	I1218 23:34:25.199462  817975 logs.go:123] Gathering logs for kube-proxy [7812921bc48533678ce40bd0da97678f42c0257205cdf0504e35df13fa84d7cd] ...
	I1218 23:34:25.199487  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7812921bc48533678ce40bd0da97678f42c0257205cdf0504e35df13fa84d7cd"
	I1218 23:34:25.258145  817975 logs.go:123] Gathering logs for CRI-O ...
	I1218 23:34:25.258172  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1218 23:34:25.287478  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:25.361373  817975 logs.go:123] Gathering logs for kindnet [2ca2de4069275903bde893607639ec8c8402ce5cbcb279e853691abc7a305af5] ...
	I1218 23:34:25.361412  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ca2de4069275903bde893607639ec8c8402ce5cbcb279e853691abc7a305af5"
	I1218 23:34:25.406141  817975 logs.go:123] Gathering logs for container status ...
	I1218 23:34:25.406170  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1218 23:34:25.468974  817975 logs.go:123] Gathering logs for dmesg ...
	I1218 23:34:25.469003  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1218 23:34:25.471227  817975 kapi.go:107] duration metric: took 1m18.009382308s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1218 23:34:25.492663  817975 logs.go:123] Gathering logs for describe nodes ...
	I1218 23:34:25.492745  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1218 23:34:25.651615  817975 logs.go:123] Gathering logs for kube-apiserver [ec5fa44c605d457f214f9f0305dbdbfe9de864f1a966cce47a57bbedcbf54d54] ...
	I1218 23:34:25.651729  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5fa44c605d457f214f9f0305dbdbfe9de864f1a966cce47a57bbedcbf54d54"
	I1218 23:34:25.776974  817975 logs.go:123] Gathering logs for etcd [77a8bf0f58d6a60e1bafbd82f4da49b3eafeee9f63cbc493706f960ffa57f319] ...
	I1218 23:34:25.777312  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77a8bf0f58d6a60e1bafbd82f4da49b3eafeee9f63cbc493706f960ffa57f319"
	I1218 23:34:25.782994  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:25.902316  817975 logs.go:123] Gathering logs for coredns [c4af11fdd2bc22d277b564886099bc81a955943fba6ac482af773169d102e974] ...
	I1218 23:34:25.902390  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4af11fdd2bc22d277b564886099bc81a955943fba6ac482af773169d102e974"
	I1218 23:34:26.035745  817975 logs.go:123] Gathering logs for kube-controller-manager [5043bc42c4fa828f18d8c27e0aa541e2bd605ecbb52c086d8e2475609ec5594f] ...
	I1218 23:34:26.035903  817975 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5043bc42c4fa828f18d8c27e0aa541e2bd605ecbb52c086d8e2475609ec5594f"
	I1218 23:34:26.285124  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:26.782453  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:27.282836  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:27.782893  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:28.283234  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:28.699131  817975 system_pods.go:59] 18 kube-system pods found
	I1218 23:34:28.699174  817975 system_pods.go:61] "coredns-5dd5756b68-cjc5m" [3968fc26-4598-4bd1-abf2-4b815b0cfc84] Running
	I1218 23:34:28.699182  817975 system_pods.go:61] "csi-hostpath-attacher-0" [2bf18d75-103d-47a1-b549-d30a29e5f370] Running
	I1218 23:34:28.699187  817975 system_pods.go:61] "csi-hostpath-resizer-0" [642be40f-ecc1-4d3a-bb2f-4b0c1ad00bde] Running
	I1218 23:34:28.699197  817975 system_pods.go:61] "csi-hostpathplugin-hb475" [b788fbd6-bc1c-4c53-a9ce-8baf073eef8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1218 23:34:28.699207  817975 system_pods.go:61] "etcd-addons-045387" [db772cc5-92e2-43e3-90e1-306f672590dc] Running
	I1218 23:34:28.699220  817975 system_pods.go:61] "kindnet-jtgls" [30d25044-a0be-423d-b37f-7c7a028fcb53] Running
	I1218 23:34:28.699230  817975 system_pods.go:61] "kube-apiserver-addons-045387" [bb895e7b-4b9e-416f-8aac-34b70cab832e] Running
	I1218 23:34:28.699236  817975 system_pods.go:61] "kube-controller-manager-addons-045387" [c350a8e6-dff4-4ef0-99da-036fe8ee1093] Running
	I1218 23:34:28.699245  817975 system_pods.go:61] "kube-ingress-dns-minikube" [14c2a57f-3041-48ad-aa97-442729b59c40] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1218 23:34:28.699255  817975 system_pods.go:61] "kube-proxy-7ltl6" [350dc5e2-97f2-42f1-afa0-4c471b768756] Running
	I1218 23:34:28.699260  817975 system_pods.go:61] "kube-scheduler-addons-045387" [4f64f0ba-9c97-4b21-a332-b02a5463d292] Running
	I1218 23:34:28.699271  817975 system_pods.go:61] "metrics-server-7c66d45ddc-q4g7r" [708e46c7-d95c-4afb-b32e-fbd121bb9051] Running
	I1218 23:34:28.699276  817975 system_pods.go:61] "nvidia-device-plugin-daemonset-8964k" [17d014f3-90d2-4166-8677-f54ffc3a0687] Running
	I1218 23:34:28.699282  817975 system_pods.go:61] "registry-proxy-2qgkd" [f67d4658-db21-4c23-ac2d-b54c5dd26372] Running
	I1218 23:34:28.699293  817975 system_pods.go:61] "registry-s859s" [c35fa1d5-ca7d-47a9-a8fc-3283888ffb9f] Running
	I1218 23:34:28.699298  817975 system_pods.go:61] "snapshot-controller-58dbcc7b99-5xnpk" [9f81619f-37f9-4ad9-82d4-06ce5f366e51] Running
	I1218 23:34:28.699303  817975 system_pods.go:61] "snapshot-controller-58dbcc7b99-xbm92" [88a8ab9a-49e1-4385-ad42-a9cdc265d089] Running
	I1218 23:34:28.699308  817975 system_pods.go:61] "storage-provisioner" [3aea6c68-d1a9-4381-b504-5f9383c4fc2b] Running
	I1218 23:34:28.699316  817975 system_pods.go:74] duration metric: took 4.230966371s to wait for pod list to return data ...
	I1218 23:34:28.699327  817975 default_sa.go:34] waiting for default service account to be created ...
	I1218 23:34:28.702225  817975 default_sa.go:45] found service account: "default"
	I1218 23:34:28.702252  817975 default_sa.go:55] duration metric: took 2.918243ms for default service account to be created ...
	I1218 23:34:28.702267  817975 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 23:34:28.712344  817975 system_pods.go:86] 18 kube-system pods found
	I1218 23:34:28.712378  817975 system_pods.go:89] "coredns-5dd5756b68-cjc5m" [3968fc26-4598-4bd1-abf2-4b815b0cfc84] Running
	I1218 23:34:28.712385  817975 system_pods.go:89] "csi-hostpath-attacher-0" [2bf18d75-103d-47a1-b549-d30a29e5f370] Running
	I1218 23:34:28.712391  817975 system_pods.go:89] "csi-hostpath-resizer-0" [642be40f-ecc1-4d3a-bb2f-4b0c1ad00bde] Running
	I1218 23:34:28.712409  817975 system_pods.go:89] "csi-hostpathplugin-hb475" [b788fbd6-bc1c-4c53-a9ce-8baf073eef8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1218 23:34:28.712418  817975 system_pods.go:89] "etcd-addons-045387" [db772cc5-92e2-43e3-90e1-306f672590dc] Running
	I1218 23:34:28.712425  817975 system_pods.go:89] "kindnet-jtgls" [30d25044-a0be-423d-b37f-7c7a028fcb53] Running
	I1218 23:34:28.712430  817975 system_pods.go:89] "kube-apiserver-addons-045387" [bb895e7b-4b9e-416f-8aac-34b70cab832e] Running
	I1218 23:34:28.712441  817975 system_pods.go:89] "kube-controller-manager-addons-045387" [c350a8e6-dff4-4ef0-99da-036fe8ee1093] Running
	I1218 23:34:28.712449  817975 system_pods.go:89] "kube-ingress-dns-minikube" [14c2a57f-3041-48ad-aa97-442729b59c40] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1218 23:34:28.712459  817975 system_pods.go:89] "kube-proxy-7ltl6" [350dc5e2-97f2-42f1-afa0-4c471b768756] Running
	I1218 23:34:28.712465  817975 system_pods.go:89] "kube-scheduler-addons-045387" [4f64f0ba-9c97-4b21-a332-b02a5463d292] Running
	I1218 23:34:28.712474  817975 system_pods.go:89] "metrics-server-7c66d45ddc-q4g7r" [708e46c7-d95c-4afb-b32e-fbd121bb9051] Running
	I1218 23:34:28.712485  817975 system_pods.go:89] "nvidia-device-plugin-daemonset-8964k" [17d014f3-90d2-4166-8677-f54ffc3a0687] Running
	I1218 23:34:28.712490  817975 system_pods.go:89] "registry-proxy-2qgkd" [f67d4658-db21-4c23-ac2d-b54c5dd26372] Running
	I1218 23:34:28.712495  817975 system_pods.go:89] "registry-s859s" [c35fa1d5-ca7d-47a9-a8fc-3283888ffb9f] Running
	I1218 23:34:28.712499  817975 system_pods.go:89] "snapshot-controller-58dbcc7b99-5xnpk" [9f81619f-37f9-4ad9-82d4-06ce5f366e51] Running
	I1218 23:34:28.712511  817975 system_pods.go:89] "snapshot-controller-58dbcc7b99-xbm92" [88a8ab9a-49e1-4385-ad42-a9cdc265d089] Running
	I1218 23:34:28.712516  817975 system_pods.go:89] "storage-provisioner" [3aea6c68-d1a9-4381-b504-5f9383c4fc2b] Running
	I1218 23:34:28.712526  817975 system_pods.go:126] duration metric: took 10.252383ms to wait for k8s-apps to be running ...
	I1218 23:34:28.712539  817975 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 23:34:28.712599  817975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:34:28.738658  817975 system_svc.go:56] duration metric: took 26.109962ms WaitForService to wait for kubelet.
	I1218 23:34:28.738685  817975 kubeadm.go:581] duration metric: took 1m26.906510065s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 23:34:28.738705  817975 node_conditions.go:102] verifying NodePressure condition ...
	I1218 23:34:28.742274  817975 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1218 23:34:28.742308  817975 node_conditions.go:123] node cpu capacity is 2
	I1218 23:34:28.742320  817975 node_conditions.go:105] duration metric: took 3.610119ms to run NodePressure ...
	I1218 23:34:28.742332  817975 start.go:228] waiting for startup goroutines ...
	I1218 23:34:28.782236  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:29.282805  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:29.782503  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:30.283618  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:30.781765  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:31.282491  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:31.781778  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:32.290681  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:32.782659  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:33.282345  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:33.782619  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:34.285543  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:34.783404  817975 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 23:34:35.281896  817975 kapi.go:107] duration metric: took 1m27.505585443s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1218 23:34:35.284460  817975 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, nvidia-device-plugin, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1218 23:34:35.287083  817975 addons.go:502] enable addons completed in 1m34.134999955s: enabled=[storage-provisioner cloud-spanner ingress-dns nvidia-device-plugin inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1218 23:34:35.287141  817975 start.go:233] waiting for cluster config update ...
	I1218 23:34:35.287162  817975 start.go:242] writing updated cluster config ...
	I1218 23:34:35.287477  817975 ssh_runner.go:195] Run: rm -f paused
	I1218 23:34:35.658644  817975 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1218 23:34:35.667622  817975 out.go:177] * Done! kubectl is now configured to use "addons-045387" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Dec 18 23:37:50 addons-045387 crio[891]: time="2023-12-18 23:37:50.983701369Z" level=info msg="Removed pod sandbox: 9dc89af786ae53f865fc079381e76d6db1c42e2338c34fa350b4c4bbf31e5c5d" id=d2a0732a-d8f3-4724-a68b-e47bb7144160 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 18 23:37:50 addons-045387 crio[891]: time="2023-12-18 23:37:50.984410720Z" level=info msg="Stopping pod sandbox: 3d2e53a28f4483745a3475e28ebc3eeb36f15ca8ced41fbae13ae59f06eb7937" id=474c3dd6-0e7f-4458-9aaa-914e0b834be0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 18 23:37:50 addons-045387 crio[891]: time="2023-12-18 23:37:50.984450769Z" level=info msg="Stopped pod sandbox (already stopped): 3d2e53a28f4483745a3475e28ebc3eeb36f15ca8ced41fbae13ae59f06eb7937" id=474c3dd6-0e7f-4458-9aaa-914e0b834be0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 18 23:37:50 addons-045387 crio[891]: time="2023-12-18 23:37:50.984790844Z" level=info msg="Removing pod sandbox: 3d2e53a28f4483745a3475e28ebc3eeb36f15ca8ced41fbae13ae59f06eb7937" id=f4f2e191-813f-4646-857c-204d8d01e792 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 18 23:37:50 addons-045387 crio[891]: time="2023-12-18 23:37:50.993039740Z" level=info msg="Removed pod sandbox: 3d2e53a28f4483745a3475e28ebc3eeb36f15ca8ced41fbae13ae59f06eb7937" id=f4f2e191-813f-4646-857c-204d8d01e792 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 18 23:37:50 addons-045387 crio[891]: time="2023-12-18 23:37:50.993651780Z" level=info msg="Stopping pod sandbox: 8641dd49123ebb437f63be13c5322edf29d191ec6cda554f67d821f6c65608e1" id=78e91bc5-6b4c-43dd-a500-c11c907b9341 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 18 23:37:50 addons-045387 crio[891]: time="2023-12-18 23:37:50.993694922Z" level=info msg="Stopped pod sandbox (already stopped): 8641dd49123ebb437f63be13c5322edf29d191ec6cda554f67d821f6c65608e1" id=78e91bc5-6b4c-43dd-a500-c11c907b9341 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 18 23:37:50 addons-045387 crio[891]: time="2023-12-18 23:37:50.994089274Z" level=info msg="Removing pod sandbox: 8641dd49123ebb437f63be13c5322edf29d191ec6cda554f67d821f6c65608e1" id=3a11059c-ddc5-40e7-bc3a-01c937f70320 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.004996297Z" level=info msg="Removed pod sandbox: 8641dd49123ebb437f63be13c5322edf29d191ec6cda554f67d821f6c65608e1" id=3a11059c-ddc5-40e7-bc3a-01c937f70320 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.608996542Z" level=warning msg="Stopping container 465a72cb9856e29c6401cc7820cbf81787caaf71946e848e1d353070544756dc with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=a4d26a89-9832-4103-839c-e821d98c07c9 name=/runtime.v1.RuntimeService/StopContainer
	Dec 18 23:37:51 addons-045387 conmon[4754]: conmon 465a72cb9856e29c6401 <ninfo>: container 4766 exited with status 137
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.753022232Z" level=info msg="Stopped container 465a72cb9856e29c6401cc7820cbf81787caaf71946e848e1d353070544756dc: ingress-nginx/ingress-nginx-controller-7c6974c4d8-lfcl6/controller" id=a4d26a89-9832-4103-839c-e821d98c07c9 name=/runtime.v1.RuntimeService/StopContainer
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.753549620Z" level=info msg="Stopping pod sandbox: 8c9405021f4e58d8e2ede3756822ddf4a13a86581f3ea4200a12167d40475a42" id=9ef462ea-f431-45ab-8b98-e74542d31cdb name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.757100539Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-DFGPQKAYBGIMPGYV - [0:0]\n:KUBE-HP-KSKTXB2W4Q2RM4V5 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-KSKTXB2W4Q2RM4V5\n-X KUBE-HP-DFGPQKAYBGIMPGYV\nCOMMIT\n"
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.758925795Z" level=info msg="Closing host port tcp:80"
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.758975288Z" level=info msg="Closing host port tcp:443"
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.760665267Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.760700360Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.760870180Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7c6974c4d8-lfcl6 Namespace:ingress-nginx ID:8c9405021f4e58d8e2ede3756822ddf4a13a86581f3ea4200a12167d40475a42 UID:00091a19-a3d4-4433-b0f7-2b19660e1ae9 NetNS:/var/run/netns/84d486c5-2b39-4813-8542-fe1a5e74ea6a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.761017214Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7c6974c4d8-lfcl6 from CNI network \"kindnet\" (type=ptp)"
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.781306079Z" level=info msg="Stopped pod sandbox: 8c9405021f4e58d8e2ede3756822ddf4a13a86581f3ea4200a12167d40475a42" id=9ef462ea-f431-45ab-8b98-e74542d31cdb name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.821537786Z" level=info msg="Removing container: ee34e8a8748821bf41cc54ff6a99f3841f6bc8f1bae8e2c215ab7dc02034943f" id=cdf915c7-9839-4764-a3af-b4047de10499 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.842014850Z" level=info msg="Removed container ee34e8a8748821bf41cc54ff6a99f3841f6bc8f1bae8e2c215ab7dc02034943f: default/hello-world-app-5d77478584-nhc27/hello-world-app" id=cdf915c7-9839-4764-a3af-b4047de10499 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.843799056Z" level=info msg="Removing container: 465a72cb9856e29c6401cc7820cbf81787caaf71946e848e1d353070544756dc" id=f22c4d05-1798-4f80-8cf4-09513d7a2f76 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 18 23:37:51 addons-045387 crio[891]: time="2023-12-18 23:37:51.863723945Z" level=info msg="Removed container 465a72cb9856e29c6401cc7820cbf81787caaf71946e848e1d353070544756dc: ingress-nginx/ingress-nginx-controller-7c6974c4d8-lfcl6/controller" id=f22c4d05-1798-4f80-8cf4-09513d7a2f76 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6624eed436023       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                               6 seconds ago        Exited              hello-world-app           2                   6d8ddbc9af95b       hello-world-app-5d77478584-nhc27
	9a307351e4a3b       ghcr.io/headlamp-k8s/headlamp@sha256:7a9587036bd29304f8f1387a7245556a3c479434670b2ca58e3624d44d2a68c9          About a minute ago   Running             headlamp                  0                   9ea1b0d2b6c52       headlamp-777fd4b855-qtmcx
	693b829690c53       docker.io/library/nginx@sha256:18d2bb20c22e511b92a3ec81f553edfcaeeb74fd1c96a92c56a6c4252c75eec7                2 minutes ago        Running             nginx                     0                   6e090f0f5a040       nginx
	a67d9a629c844       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa   3 minutes ago        Running             gcp-auth                  0                   51c87af8290b5       gcp-auth-d4c87556c-vxk82
	7bfe20859acf4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                               4 minutes ago        Running             storage-provisioner       0                   c56bcc836d81d       storage-provisioner
	c4af11fdd2bc2       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                               4 minutes ago        Running             coredns                   0                   4986a1ed2e044       coredns-5dd5756b68-cjc5m
	7812921bc4853       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                               4 minutes ago        Running             kube-proxy                0                   8caf8c4115af0       kube-proxy-7ltl6
	2ca2de4069275       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                               4 minutes ago        Running             kindnet-cni               0                   340ccb8ced75c       kindnet-jtgls
	77a8bf0f58d6a       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                               5 minutes ago        Running             etcd                      0                   6e67584bba61c       etcd-addons-045387
	ca5bf915cc2e4       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                               5 minutes ago        Running             kube-scheduler            0                   36509f245f1c6       kube-scheduler-addons-045387
	ec5fa44c605d4       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                               5 minutes ago        Running             kube-apiserver            0                   a4d0dcaf00324       kube-apiserver-addons-045387
	5043bc42c4fa8       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                               5 minutes ago        Running             kube-controller-manager   0                   9527d513dfb4e       kube-controller-manager-addons-045387
	
	* 
	* ==> coredns [c4af11fdd2bc22d277b564886099bc81a955943fba6ac482af773169d102e974] <==
	* [INFO] 10.244.0.19:40154 - 10488 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038162s
	[INFO] 10.244.0.19:44169 - 53833 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002017385s
	[INFO] 10.244.0.19:40154 - 21635 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00134032s
	[INFO] 10.244.0.19:44169 - 18710 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001506022s
	[INFO] 10.244.0.19:40154 - 52738 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002060339s
	[INFO] 10.244.0.19:40154 - 31224 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000093423s
	[INFO] 10.244.0.19:44169 - 64835 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048524s
	[INFO] 10.244.0.19:47141 - 47090 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000142005s
	[INFO] 10.244.0.19:43987 - 49557 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00006473s
	[INFO] 10.244.0.19:43987 - 42582 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000069169s
	[INFO] 10.244.0.19:43987 - 5030 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005842s
	[INFO] 10.244.0.19:43987 - 62588 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058527s
	[INFO] 10.244.0.19:43987 - 58341 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059503s
	[INFO] 10.244.0.19:43987 - 40198 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004251s
	[INFO] 10.244.0.19:47141 - 10264 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000057058s
	[INFO] 10.244.0.19:47141 - 9804 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041919s
	[INFO] 10.244.0.19:47141 - 852 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057969s
	[INFO] 10.244.0.19:47141 - 45391 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059766s
	[INFO] 10.244.0.19:47141 - 13173 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000597s
	[INFO] 10.244.0.19:43987 - 61508 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001246553s
	[INFO] 10.244.0.19:47141 - 60580 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001126997s
	[INFO] 10.244.0.19:43987 - 54777 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001004036s
	[INFO] 10.244.0.19:43987 - 40785 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000100339s
	[INFO] 10.244.0.19:47141 - 57578 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000914749s
	[INFO] 10.244.0.19:47141 - 19811 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058962s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-045387
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-045387
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2
	                    minikube.k8s.io/name=addons-045387
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_18T23_32_48_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-045387
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 23:32:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-045387
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 23:37:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 23:37:54 +0000   Mon, 18 Dec 2023 23:32:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 23:37:54 +0000   Mon, 18 Dec 2023 23:32:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 23:37:54 +0000   Mon, 18 Dec 2023 23:32:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 23:37:54 +0000   Mon, 18 Dec 2023 23:33:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-045387
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 50cf8ffabba54948be19f87657333634
	  System UUID:                c51fab1a-7836-492a-8816-f265dd5915d1
	  Boot ID:                    a58889d6-3937-44de-bde4-55a8fc7b5b88
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-nhc27         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  gcp-auth                    gcp-auth-d4c87556c-vxk82                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  headlamp                    headlamp-777fd4b855-qtmcx                0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 coredns-5dd5756b68-cjc5m                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m56s
	  kube-system                 etcd-addons-045387                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m9s
	  kube-system                 kindnet-jtgls                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m56s
	  kube-system                 kube-apiserver-addons-045387             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-controller-manager-addons-045387    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-proxy-7ltl6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-addons-045387             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m50s  kube-proxy       
	  Normal  Starting                 5m10s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m10s  kubelet          Node addons-045387 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m10s  kubelet          Node addons-045387 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m10s  kubelet          Node addons-045387 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m57s  node-controller  Node addons-045387 event: Registered Node addons-045387 in Controller
	  Normal  NodeReady                4m23s  kubelet          Node addons-045387 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001096] FS-Cache: O-key=[8] 'ddd1c90000000000'
	[  +0.000732] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000992] FS-Cache: N-cookie d=00000000a818fb90{9p.inode} n=00000000c62970d8
	[  +0.001084] FS-Cache: N-key=[8] 'ddd1c90000000000'
	[  +0.005854] FS-Cache: Duplicate cookie detected
	[  +0.000739] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001038] FS-Cache: O-cookie d=00000000a818fb90{9p.inode} n=00000000bfea5ebf
	[  +0.001097] FS-Cache: O-key=[8] 'ddd1c90000000000'
	[  +0.000732] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000967] FS-Cache: N-cookie d=00000000a818fb90{9p.inode} n=00000000e43fe467
	[  +0.001125] FS-Cache: N-key=[8] 'ddd1c90000000000'
	[  +3.262375] FS-Cache: Duplicate cookie detected
	[  +0.000762] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001123] FS-Cache: O-cookie d=00000000a818fb90{9p.inode} n=00000000aee5d210
	[  +0.001163] FS-Cache: O-key=[8] 'dcd1c90000000000'
	[  +0.000758] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001038] FS-Cache: N-cookie d=00000000a818fb90{9p.inode} n=00000000089d507d
	[  +0.001119] FS-Cache: N-key=[8] 'dcd1c90000000000'
	[  +0.379151] FS-Cache: Duplicate cookie detected
	[  +0.000968] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001200] FS-Cache: O-cookie d=00000000a818fb90{9p.inode} n=00000000956c42fe
	[  +0.001113] FS-Cache: O-key=[8] 'e2d1c90000000000'
	[  +0.000761] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000988] FS-Cache: N-cookie d=00000000a818fb90{9p.inode} n=00000000c62970d8
	[  +0.001100] FS-Cache: N-key=[8] 'e2d1c90000000000'
	
	* 
	* ==> etcd [77a8bf0f58d6a60e1bafbd82f4da49b3eafeee9f63cbc493706f960ffa57f319] <==
	* {"level":"info","ts":"2023-12-18T23:33:02.107018Z","caller":"traceutil/trace.go:171","msg":"trace[1417078126] transaction","detail":"{read_only:false; response_revision:358; number_of_response:1; }","duration":"134.414858ms","start":"2023-12-18T23:33:01.972592Z","end":"2023-12-18T23:33:02.107007Z","steps":["trace[1417078126] 'process raft request'  (duration: 73.990587ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T23:33:02.233375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.798884ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" ","response":"range_response_count:1 size:3635"}
	{"level":"info","ts":"2023-12-18T23:33:02.240613Z","caller":"traceutil/trace.go:171","msg":"trace[580953553] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-5dd5756b68; range_end:; response_count:1; response_revision:358; }","duration":"268.05885ms","start":"2023-12-18T23:33:01.972533Z","end":"2023-12-18T23:33:02.240592Z","steps":["trace[580953553] 'agreement among raft nodes before linearized reading'  (duration: 74.146385ms)","trace[580953553] 'get authentication metadata'  (duration: 129.056192ms)","trace[580953553] 'range keys from in-memory index tree'  (duration: 57.549546ms)"],"step_count":3}
	{"level":"info","ts":"2023-12-18T23:33:02.81514Z","caller":"traceutil/trace.go:171","msg":"trace[792187170] linearizableReadLoop","detail":"{readStateIndex:372; appliedIndex:371; }","duration":"129.875081ms","start":"2023-12-18T23:33:02.685252Z","end":"2023-12-18T23:33:02.815127Z","steps":["trace[792187170] 'read index received'  (duration: 129.758586ms)","trace[792187170] 'applied index is now lower than readState.Index'  (duration: 115.946µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-18T23:33:02.824567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.75946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2023-12-18T23:33:02.824617Z","caller":"traceutil/trace.go:171","msg":"trace[974498707] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:361; }","duration":"139.817823ms","start":"2023-12-18T23:33:02.684789Z","end":"2023-12-18T23:33:02.824607Z","steps":["trace[974498707] 'agreement among raft nodes before linearized reading'  (duration: 139.737101ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-18T23:33:02.846401Z","caller":"traceutil/trace.go:171","msg":"trace[41087327] transaction","detail":"{read_only:false; response_revision:361; number_of_response:1; }","duration":"130.605324ms","start":"2023-12-18T23:33:02.684681Z","end":"2023-12-18T23:33:02.815286Z","steps":["trace[41087327] 'process raft request'  (duration: 130.360157ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T23:33:03.292769Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.632962ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128025905764551535 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kindnet-jtgls.17a2111319cbf8c1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet-jtgls.17a2111319cbf8c1\" value_size:690 lease:8128025905764551197 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-12-18T23:33:03.292934Z","caller":"traceutil/trace.go:171","msg":"trace[360785556] linearizableReadLoop","detail":"{readStateIndex:374; appliedIndex:373; }","duration":"365.520687ms","start":"2023-12-18T23:33:02.927402Z","end":"2023-12-18T23:33:03.292922Z","steps":["trace[360785556] 'read index received'  (duration: 9.20655ms)","trace[360785556] 'applied index is now lower than readState.Index'  (duration: 356.312947ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-18T23:33:03.293171Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"365.780492ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/\" range_end:\"/registry/serviceaccounts/kube-system0\" ","response":"range_response_count:36 size:7511"}
	{"level":"info","ts":"2023-12-18T23:33:03.293247Z","caller":"traceutil/trace.go:171","msg":"trace[2061471402] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/; range_end:/registry/serviceaccounts/kube-system0; response_count:36; response_revision:363; }","duration":"365.842736ms","start":"2023-12-18T23:33:02.927374Z","end":"2023-12-18T23:33:03.293217Z","steps":["trace[2061471402] 'agreement among raft nodes before linearized reading'  (duration: 365.632054ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T23:33:03.2933Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-18T23:33:02.927354Z","time spent":"365.936627ms","remote":"127.0.0.1:46190","response type":"/etcdserverpb.KV/Range","request count":0,"request size":80,"response count":36,"response size":7535,"request content":"key:\"/registry/serviceaccounts/kube-system/\" range_end:\"/registry/serviceaccounts/kube-system0\" "}
	{"level":"info","ts":"2023-12-18T23:33:03.294089Z","caller":"traceutil/trace.go:171","msg":"trace[606191995] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"369.099397ms","start":"2023-12-18T23:33:02.924975Z","end":"2023-12-18T23:33:03.294074Z","steps":["trace[606191995] 'process raft request'  (duration: 134.106652ms)","trace[606191995] 'compare'  (duration: 233.320341ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-18T23:33:03.294158Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-18T23:33:02.924963Z","time spent":"369.164127ms","remote":"127.0.0.1:46068","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":767,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kindnet-jtgls.17a2111319cbf8c1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet-jtgls.17a2111319cbf8c1\" value_size:690 lease:8128025905764551197 >> failure:<>"}
	{"level":"info","ts":"2023-12-18T23:33:03.884394Z","caller":"traceutil/trace.go:171","msg":"trace[936732394] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"100.051566ms","start":"2023-12-18T23:33:03.784316Z","end":"2023-12-18T23:33:03.884367Z","steps":["trace[936732394] 'process raft request'  (duration: 100.00789ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-18T23:33:03.894782Z","caller":"traceutil/trace.go:171","msg":"trace[171451422] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"146.96159ms","start":"2023-12-18T23:33:03.747797Z","end":"2023-12-18T23:33:03.894759Z","steps":["trace[171451422] 'process raft request'  (duration: 54.524933ms)","trace[171451422] 'compare'  (duration: 81.84597ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-18T23:33:03.895022Z","caller":"traceutil/trace.go:171","msg":"trace[880815505] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"110.801776ms","start":"2023-12-18T23:33:03.784212Z","end":"2023-12-18T23:33:03.895014Z","steps":["trace[880815505] 'process raft request'  (duration: 100.057162ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-18T23:33:03.932308Z","caller":"traceutil/trace.go:171","msg":"trace[499621222] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"147.536099ms","start":"2023-12-18T23:33:03.784749Z","end":"2023-12-18T23:33:03.932285Z","steps":["trace[499621222] 'process raft request'  (duration: 145.387113ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T23:33:03.938983Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.599742ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-18T23:33:03.939126Z","caller":"traceutil/trace.go:171","msg":"trace[941159767] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:368; }","duration":"136.753808ms","start":"2023-12-18T23:33:03.802361Z","end":"2023-12-18T23:33:03.939115Z","steps":["trace[941159767] 'agreement among raft nodes before linearized reading'  (duration: 136.51125ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-18T23:33:03.976584Z","caller":"traceutil/trace.go:171","msg":"trace[1063976771] linearizableReadLoop","detail":"{readStateIndex:379; appliedIndex:375; }","duration":"121.6294ms","start":"2023-12-18T23:33:03.80887Z","end":"2023-12-18T23:33:03.930499Z","steps":["trace[1063976771] 'read index received'  (duration: 69.716204ms)","trace[1063976771] 'applied index is now lower than readState.Index'  (duration: 51.912483ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-18T23:33:06.31426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.190911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/ingress-nginx/\" range_end:\"/registry/resourcequotas/ingress-nginx0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-18T23:33:06.31432Z","caller":"traceutil/trace.go:171","msg":"trace[629926519] range","detail":"{range_begin:/registry/resourcequotas/ingress-nginx/; range_end:/registry/resourcequotas/ingress-nginx0; response_count:0; response_revision:447; }","duration":"102.276663ms","start":"2023-12-18T23:33:06.212032Z","end":"2023-12-18T23:33:06.314308Z","steps":["trace[629926519] 'agreement among raft nodes before linearized reading'  (duration: 102.161611ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T23:33:06.314626Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.609132ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-18T23:33:06.314941Z","caller":"traceutil/trace.go:171","msg":"trace[3482752] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:447; }","duration":"102.923886ms","start":"2023-12-18T23:33:06.212008Z","end":"2023-12-18T23:33:06.314932Z","steps":["trace[3482752] 'agreement among raft nodes before linearized reading'  (duration: 102.593411ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [a67d9a629c8447c98c902bd4bd66138a70cfcc3e6774d9b38e7c3399ac206bfd] <==
	* 2023/12/18 23:34:17 GCP Auth Webhook started!
	2023/12/18 23:34:46 Ready to marshal response ...
	2023/12/18 23:34:46 Ready to write response ...
	2023/12/18 23:34:57 Ready to marshal response ...
	2023/12/18 23:34:57 Ready to write response ...
	2023/12/18 23:35:10 Ready to marshal response ...
	2023/12/18 23:35:10 Ready to write response ...
	2023/12/18 23:35:21 Ready to marshal response ...
	2023/12/18 23:35:21 Ready to write response ...
	2023/12/18 23:35:38 Ready to marshal response ...
	2023/12/18 23:35:38 Ready to write response ...
	2023/12/18 23:35:39 Ready to marshal response ...
	2023/12/18 23:35:39 Ready to write response ...
	2023/12/18 23:35:48 Ready to marshal response ...
	2023/12/18 23:35:48 Ready to write response ...
	2023/12/18 23:36:46 Ready to marshal response ...
	2023/12/18 23:36:46 Ready to write response ...
	2023/12/18 23:36:46 Ready to marshal response ...
	2023/12/18 23:36:46 Ready to write response ...
	2023/12/18 23:36:46 Ready to marshal response ...
	2023/12/18 23:36:46 Ready to write response ...
	2023/12/18 23:37:30 Ready to marshal response ...
	2023/12/18 23:37:30 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:37:57 up  4:20,  0 users,  load average: 0.56, 1.66, 2.50
	Linux addons-045387 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [2ca2de4069275903bde893607639ec8c8402ce5cbcb279e853691abc7a305af5] <==
	* I1218 23:35:54.591340       1 main.go:227] handling current node
	I1218 23:36:04.595713       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:36:04.595746       1 main.go:227] handling current node
	I1218 23:36:14.607502       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:36:14.607532       1 main.go:227] handling current node
	I1218 23:36:24.611601       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:36:24.611632       1 main.go:227] handling current node
	I1218 23:36:34.619848       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:36:34.619881       1 main.go:227] handling current node
	I1218 23:36:44.632456       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:36:44.632486       1 main.go:227] handling current node
	I1218 23:36:54.645691       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:36:54.645720       1 main.go:227] handling current node
	I1218 23:37:04.650418       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:37:04.650451       1 main.go:227] handling current node
	I1218 23:37:14.667150       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:37:14.667257       1 main.go:227] handling current node
	I1218 23:37:24.670952       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:37:24.670977       1 main.go:227] handling current node
	I1218 23:37:34.682801       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:37:34.682827       1 main.go:227] handling current node
	I1218 23:37:44.695476       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:37:44.695615       1 main.go:227] handling current node
	I1218 23:37:54.700472       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:37:54.700498       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [ec5fa44c605d457f214f9f0305dbdbfe9de864f1a966cce47a57bbedcbf54d54] <==
	* I1218 23:35:38.290042       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:35:38.290213       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:35:38.301143       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:35:38.301205       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:35:38.311621       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:35:38.311670       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:35:38.329878       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:35:38.329929       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:35:38.336203       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:35:38.336326       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:35:38.351349       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:35:38.351401       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 23:35:38.367423       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 23:35:38.367473       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1218 23:35:39.312723       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1218 23:35:39.368176       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1218 23:35:39.374638       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1218 23:35:41.486604       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1218 23:35:49.777288       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1218 23:35:49.780565       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1218 23:35:49.783908       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1218 23:36:04.786114       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1218 23:36:46.834260       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.38.52"}
	I1218 23:37:31.001607       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.11.248"}
	E1218 23:37:47.833623       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x4008a95890), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x40006b9860), ResponseWriter:(*httpsnoop.rw)(0x40006b9860), Flusher:(*httpsnoop.rw)(0x40006b9860), CloseNotifier:(*httpsnoop.rw)(0x40006b9860), Pusher:(*httpsnoop.rw)(0x40006b9860)}}, encoder:(*versioning.codec)(0x40087e7b80), memAllocator:(*runtime.Allocator)(0x4005e99950)})
	
	* 
	* ==> kube-controller-manager [5043bc42c4fa828f18d8c27e0aa541e2bd605ecbb52c086d8e2475609ec5594f] <==
	* W1218 23:37:01.099494       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:37:01.099531       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 23:37:05.903521       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:37:05.903555       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 23:37:25.314118       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:37:25.314152       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1218 23:37:30.711001       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1218 23:37:30.731767       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-nhc27"
	I1218 23:37:30.745201       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="33.918835ms"
	I1218 23:37:30.763073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="17.625635ms"
	I1218 23:37:30.785124       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="21.895268ms"
	I1218 23:37:30.785223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="59.971µs"
	I1218 23:37:33.792646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="53.078µs"
	I1218 23:37:34.796022       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="120.745µs"
	I1218 23:37:35.794130       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="91.56µs"
	W1218 23:37:37.821818       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:37:37.821852       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 23:37:41.742097       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:37:41.742130       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 23:37:41.891183       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 23:37:41.891218       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1218 23:37:48.574069       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1218 23:37:48.579705       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="6.441µs"
	I1218 23:37:48.582478       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1218 23:37:51.831175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="61.275µs"
	
	* 
	* ==> kube-proxy [7812921bc48533678ce40bd0da97678f42c0257205cdf0504e35df13fa84d7cd] <==
	* I1218 23:33:06.551426       1 server_others.go:69] "Using iptables proxy"
	I1218 23:33:06.810538       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1218 23:33:07.022794       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1218 23:33:07.030595       1 server_others.go:152] "Using iptables Proxier"
	I1218 23:33:07.030634       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1218 23:33:07.030641       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1218 23:33:07.030694       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1218 23:33:07.031047       1 server.go:846] "Version info" version="v1.28.4"
	I1218 23:33:07.031066       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 23:33:07.044949       1 config.go:188] "Starting service config controller"
	I1218 23:33:07.045077       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1218 23:33:07.045132       1 config.go:97] "Starting endpoint slice config controller"
	I1218 23:33:07.045163       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1218 23:33:07.045750       1 config.go:315] "Starting node config controller"
	I1218 23:33:07.045817       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1218 23:33:07.145432       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1218 23:33:07.145513       1 shared_informer.go:318] Caches are synced for service config
	I1218 23:33:07.149236       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [ca5bf915cc2e4cf957bcfb0e0436aa3bc88ddf0c96d21ea58897b89843bf62ab] <==
	* I1218 23:32:44.132325       1 serving.go:348] Generated self-signed cert in-memory
	I1218 23:32:46.924812       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1218 23:32:46.924843       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 23:32:46.929127       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1218 23:32:46.929220       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1218 23:32:46.929497       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1218 23:32:46.929549       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1218 23:32:46.929869       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1218 23:32:46.929914       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1218 23:32:46.930621       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1218 23:32:46.932887       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1218 23:32:47.029591       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1218 23:32:47.029732       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1218 23:32:47.030625       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 18 23:37:48 addons-045387 kubelet[1354]: E1218 23:37:48.144133    1354 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a83903fe88ac48e9090a687d601bf115708215062ea56eee934bd37956e90099/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a83903fe88ac48e9090a687d601bf115708215062ea56eee934bd37956e90099/diff: no such file or directory, extraDiskErr: <nil>
	Dec 18 23:37:48 addons-045387 kubelet[1354]: E1218 23:37:48.150714    1354 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5213706ebf04793f2c11f7fee5ca179672c142a05856f5329a7ef3a1a41a4cbb/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5213706ebf04793f2c11f7fee5ca179672c142a05856f5329a7ef3a1a41a4cbb/diff: no such file or directory, extraDiskErr: <nil>
	Dec 18 23:37:48 addons-045387 kubelet[1354]: E1218 23:37:48.150754    1354 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7eae79f771e14819220d956157e0744f869e0ab6c51a13235ce9b3adba84ba6c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7eae79f771e14819220d956157e0744f869e0ab6c51a13235ce9b3adba84ba6c/diff: no such file or directory, extraDiskErr: <nil>
	Dec 18 23:37:48 addons-045387 kubelet[1354]: E1218 23:37:48.152869    1354 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8b02ff103bd0921f8006a44976f3c5b800ec0bdc978658f0cab1dd2929ba5689/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8b02ff103bd0921f8006a44976f3c5b800ec0bdc978658f0cab1dd2929ba5689/diff: no such file or directory, extraDiskErr: <nil>
	Dec 18 23:37:49 addons-045387 kubelet[1354]: E1218 23:37:49.125993    1354 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f8fac25fd55802b30486b4c3cbc6280a6db8c5fbd21cf081940b040dbbc328ff/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f8fac25fd55802b30486b4c3cbc6280a6db8c5fbd21cf081940b040dbbc328ff/diff: no such file or directory, extraDiskErr: <nil>
	Dec 18 23:37:49 addons-045387 kubelet[1354]: I1218 23:37:49.834266    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="14c2a57f-3041-48ad-aa97-442729b59c40" path="/var/lib/kubelet/pods/14c2a57f-3041-48ad-aa97-442729b59c40/volumes"
	Dec 18 23:37:49 addons-045387 kubelet[1354]: I1218 23:37:49.834785    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8af6cd95-face-47a2-9b55-23796db71020" path="/var/lib/kubelet/pods/8af6cd95-face-47a2-9b55-23796db71020/volumes"
	Dec 18 23:37:49 addons-045387 kubelet[1354]: I1218 23:37:49.835174    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e3eefe4a-6bea-41e0-84a5-7c208cd0d3fb" path="/var/lib/kubelet/pods/e3eefe4a-6bea-41e0-84a5-7c208cd0d3fb/volumes"
	Dec 18 23:37:50 addons-045387 kubelet[1354]: I1218 23:37:50.832579    1354 scope.go:117] "RemoveContainer" containerID="ee34e8a8748821bf41cc54ff6a99f3841f6bc8f1bae8e2c215ab7dc02034943f"
	Dec 18 23:37:50 addons-045387 kubelet[1354]: I1218 23:37:50.907546    1354 scope.go:117] "RemoveContainer" containerID="ebb25751554f4f3cf2dd857b2dccd52b61f07fd0db231b81be01d0bb9679b5f6"
	Dec 18 23:37:50 addons-045387 kubelet[1354]: I1218 23:37:50.939109    1354 scope.go:117] "RemoveContainer" containerID="16fc2157ab17b8fba4aae8eef15121aeb3778c3264d1faefacdc1e444c99d76d"
	Dec 18 23:37:51 addons-045387 kubelet[1354]: I1218 23:37:51.815124    1354 scope.go:117] "RemoveContainer" containerID="ee34e8a8748821bf41cc54ff6a99f3841f6bc8f1bae8e2c215ab7dc02034943f"
	Dec 18 23:37:51 addons-045387 kubelet[1354]: I1218 23:37:51.815404    1354 scope.go:117] "RemoveContainer" containerID="6624eed436023e5fd8312ea82e6c1adffd93b909930028c804878aa9a0559fe7"
	Dec 18 23:37:51 addons-045387 kubelet[1354]: E1218 23:37:51.815666    1354 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-nhc27_default(31088273-51b1-437b-95d3-a2d44e334776)\"" pod="default/hello-world-app-5d77478584-nhc27" podUID="31088273-51b1-437b-95d3-a2d44e334776"
	Dec 18 23:37:51 addons-045387 kubelet[1354]: I1218 23:37:51.842600    1354 scope.go:117] "RemoveContainer" containerID="465a72cb9856e29c6401cc7820cbf81787caaf71946e848e1d353070544756dc"
	Dec 18 23:37:51 addons-045387 kubelet[1354]: I1218 23:37:51.848228    1354 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/00091a19-a3d4-4433-b0f7-2b19660e1ae9-webhook-cert\") pod \"00091a19-a3d4-4433-b0f7-2b19660e1ae9\" (UID: \"00091a19-a3d4-4433-b0f7-2b19660e1ae9\") "
	Dec 18 23:37:51 addons-045387 kubelet[1354]: I1218 23:37:51.848287    1354 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2nhj\" (UniqueName: \"kubernetes.io/projected/00091a19-a3d4-4433-b0f7-2b19660e1ae9-kube-api-access-p2nhj\") pod \"00091a19-a3d4-4433-b0f7-2b19660e1ae9\" (UID: \"00091a19-a3d4-4433-b0f7-2b19660e1ae9\") "
	Dec 18 23:37:51 addons-045387 kubelet[1354]: I1218 23:37:51.860392    1354 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00091a19-a3d4-4433-b0f7-2b19660e1ae9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "00091a19-a3d4-4433-b0f7-2b19660e1ae9" (UID: "00091a19-a3d4-4433-b0f7-2b19660e1ae9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 18 23:37:51 addons-045387 kubelet[1354]: I1218 23:37:51.863023    1354 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00091a19-a3d4-4433-b0f7-2b19660e1ae9-kube-api-access-p2nhj" (OuterVolumeSpecName: "kube-api-access-p2nhj") pod "00091a19-a3d4-4433-b0f7-2b19660e1ae9" (UID: "00091a19-a3d4-4433-b0f7-2b19660e1ae9"). InnerVolumeSpecName "kube-api-access-p2nhj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 18 23:37:51 addons-045387 kubelet[1354]: I1218 23:37:51.864038    1354 scope.go:117] "RemoveContainer" containerID="465a72cb9856e29c6401cc7820cbf81787caaf71946e848e1d353070544756dc"
	Dec 18 23:37:51 addons-045387 kubelet[1354]: E1218 23:37:51.864422    1354 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"465a72cb9856e29c6401cc7820cbf81787caaf71946e848e1d353070544756dc\": container with ID starting with 465a72cb9856e29c6401cc7820cbf81787caaf71946e848e1d353070544756dc not found: ID does not exist" containerID="465a72cb9856e29c6401cc7820cbf81787caaf71946e848e1d353070544756dc"
	Dec 18 23:37:51 addons-045387 kubelet[1354]: I1218 23:37:51.864464    1354 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"465a72cb9856e29c6401cc7820cbf81787caaf71946e848e1d353070544756dc"} err="failed to get container status \"465a72cb9856e29c6401cc7820cbf81787caaf71946e848e1d353070544756dc\": rpc error: code = NotFound desc = could not find container \"465a72cb9856e29c6401cc7820cbf81787caaf71946e848e1d353070544756dc\": container with ID starting with 465a72cb9856e29c6401cc7820cbf81787caaf71946e848e1d353070544756dc not found: ID does not exist"
	Dec 18 23:37:51 addons-045387 kubelet[1354]: I1218 23:37:51.949786    1354 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p2nhj\" (UniqueName: \"kubernetes.io/projected/00091a19-a3d4-4433-b0f7-2b19660e1ae9-kube-api-access-p2nhj\") on node \"addons-045387\" DevicePath \"\""
	Dec 18 23:37:51 addons-045387 kubelet[1354]: I1218 23:37:51.949837    1354 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/00091a19-a3d4-4433-b0f7-2b19660e1ae9-webhook-cert\") on node \"addons-045387\" DevicePath \"\""
	Dec 18 23:37:53 addons-045387 kubelet[1354]: I1218 23:37:53.833555    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="00091a19-a3d4-4433-b0f7-2b19660e1ae9" path="/var/lib/kubelet/pods/00091a19-a3d4-4433-b0f7-2b19660e1ae9/volumes"
	
	* 
	* ==> storage-provisioner [7bfe20859acf4f4da2510d92e4744eb0ce0db3f0ba0308b3fce1d7438a48399a] <==
	* I1218 23:33:35.434696       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1218 23:33:35.671512       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1218 23:33:35.683517       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1218 23:33:35.706545       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1218 23:33:35.707770       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-045387_66bb2a7a-a760-474d-a24c-a0e4fd60e605!
	I1218 23:33:35.708670       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"87a77430-2002-4290-96e1-b050bcb35701", APIVersion:"v1", ResourceVersion:"868", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-045387_66bb2a7a-a760-474d-a24c-a0e4fd60e605 became leader
	I1218 23:33:35.808428       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-045387_66bb2a7a-a760-474d-a24c-a0e4fd60e605!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-045387 -n addons-045387
helpers_test.go:261: (dbg) Run:  kubectl --context addons-045387 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.26s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (180.53s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-715187 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-715187 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.05541628s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-715187 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-715187 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f882d564-f70f-4fcc-874b-cc87f03be457] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f882d564-f70f-4fcc-874b-cc87f03be457] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.00370829s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-715187 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1218 23:45:03.390815  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:46:41.572697  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1218 23:46:41.578029  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1218 23:46:41.588360  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1218 23:46:41.608630  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1218 23:46:41.648883  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1218 23:46:41.729169  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1218 23:46:41.889605  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1218 23:46:42.210304  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1218 23:46:42.850664  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1218 23:46:44.131724  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1218 23:46:46.692895  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1218 23:46:51.813085  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1218 23:47:02.053252  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-715187 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.571503321s)

** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
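
To reproduce this failing check outside the harness, a hedged Go sketch that re-runs the same command the test used (binary path and profile name are copied from the log above; the explicit curl timeout is an assumption, the test's own curl ran without one and hit ssh exit status 28):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Same check the test performs: curl the ingress from inside the node,
	// sending the Host header that the nginx Ingress rule matches on.
	cmd := exec.Command("out/minikube-linux-arm64",
		"-p", "ingress-addon-legacy-715187",
		"ssh", "curl -s --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'")
	start := time.Now()
	out, err := cmd.CombinedOutput()
	fmt.Printf("took %s, err=%v\n%s\n", time.Since(start), err, out)
}
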
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-715187 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-715187 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1218 23:47:22.533485  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.021431965s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
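
The nslookup above checks that the ingress-dns addon answers for hello-john.test on the node IP; a minimal Go sketch of the same query (the node IP 192.168.49.2 comes from the log, the 10s timeout is an assumption):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolve hello-john.test directly against the minikube node's DNS
	// (the ingress-dns addon), bypassing the host resolver.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 10 * time.Second}
			return d.DialContext(ctx, "udp", "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	fmt.Println(addrs, err)
}
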
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-715187 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-715187 addons disable ingress-dns --alsologtostderr -v=1: (2.577601066s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-715187 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-715187 addons disable ingress --alsologtostderr -v=1: (7.633935516s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-715187
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-715187:
-- stdout --
	[
	    {
	        "Id": "ff664531344995b115d152cd39dba8da6f7e54e185cfb5a8cce36581e8920d98",
	        "Created": "2023-12-18T23:43:16.422838498Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 845621,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-18T23:43:16.731253795Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1814c03459e4d6cfdad8e09772588ae07599742dd01f942aa2f9fe1dbb6d2813",
	        "ResolvConfPath": "/var/lib/docker/containers/ff664531344995b115d152cd39dba8da6f7e54e185cfb5a8cce36581e8920d98/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff664531344995b115d152cd39dba8da6f7e54e185cfb5a8cce36581e8920d98/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff664531344995b115d152cd39dba8da6f7e54e185cfb5a8cce36581e8920d98/hosts",
	        "LogPath": "/var/lib/docker/containers/ff664531344995b115d152cd39dba8da6f7e54e185cfb5a8cce36581e8920d98/ff664531344995b115d152cd39dba8da6f7e54e185cfb5a8cce36581e8920d98-json.log",
	        "Name": "/ingress-addon-legacy-715187",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-715187:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-715187",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/34d8e212575be67904fb22073e59dea4147230509ce37e8d7e1a99a7f0834928-init/diff:/var/lib/docker/overlay2/db874852d391376facd52e960a3e68faa10fa2be9d9e14dbf2dda2d1f908e37e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/34d8e212575be67904fb22073e59dea4147230509ce37e8d7e1a99a7f0834928/merged",
	                "UpperDir": "/var/lib/docker/overlay2/34d8e212575be67904fb22073e59dea4147230509ce37e8d7e1a99a7f0834928/diff",
	                "WorkDir": "/var/lib/docker/overlay2/34d8e212575be67904fb22073e59dea4147230509ce37e8d7e1a99a7f0834928/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-715187",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-715187/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-715187",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-715187",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-715187",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b9b1f3f275e80f4389f27edb6e400c782c796cd98c5bde61f08765d6ddee41a4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b9b1f3f275e8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-715187": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ff6645313449",
	                        "ingress-addon-legacy-715187"
	                    ],
	                    "NetworkID": "d6647efbc43fcf2a662c19befa4096c1d3da51cd700ee46a634ab2b01bd5f8ab",
	                    "EndpointID": "902990b93a992b9a1529f48c741a5fa0b04e75655bd6b323f9506530e6906154",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
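
When reading post-mortem dumps like the docker inspect output above, it can help to pull out only the fields of interest; a small hedged Go sketch that decodes the same `docker inspect` JSON and prints the container state and network address (container name copied from the log):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "inspect", "ingress-addon-legacy-715187").Output()
	if err != nil {
		panic(err)
	}
	// docker inspect returns a JSON array, one element per inspected container.
	var containers []struct {
		State struct {
			Status   string
			ExitCode int
		}
		NetworkSettings struct {
			Networks map[string]struct{ IPAddress string }
		}
	}
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	for _, c := range containers {
		fmt.Println("status:", c.State.Status, "exit:", c.State.ExitCode)
		for name, n := range c.NetworkSettings.Networks {
			fmt.Println("network:", name, "ip:", n.IPAddress)
		}
	}
}
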
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-715187 -n ingress-addon-legacy-715187
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-715187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-715187 logs -n 25: (1.432993607s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-348956                                                      | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-348956                                                      | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-348956 image load --daemon                                  | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-348956               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-348956 image ls                                             | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	| image          | functional-348956 image save                                           | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-348956               |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-348956 image rm                                             | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-348956               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-348956 image ls                                             | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	| image          | functional-348956 image load                                           | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-348956 image ls                                             | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	| image          | functional-348956 image save --daemon                                  | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-348956               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-348956                                                      | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-348956                                                      | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-348956                                                      | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-348956                                                      | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-348956 ssh pgrep                                            | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-348956 image build -t                                       | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	|                | localhost/my-image:functional-348956                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-348956 image ls                                             | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	| delete         | -p functional-348956                                                   | functional-348956           | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:42 UTC |
	| start          | -p ingress-addon-legacy-715187                                         | ingress-addon-legacy-715187 | jenkins | v1.32.0 | 18 Dec 23 23:42 UTC | 18 Dec 23 23:44 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-715187                                            | ingress-addon-legacy-715187 | jenkins | v1.32.0 | 18 Dec 23 23:44 UTC | 18 Dec 23 23:44 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-715187                                            | ingress-addon-legacy-715187 | jenkins | v1.32.0 | 18 Dec 23 23:44 UTC | 18 Dec 23 23:44 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-715187                                            | ingress-addon-legacy-715187 | jenkins | v1.32.0 | 18 Dec 23 23:45 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-715187 ip                                         | ingress-addon-legacy-715187 | jenkins | v1.32.0 | 18 Dec 23 23:47 UTC | 18 Dec 23 23:47 UTC |
	| addons         | ingress-addon-legacy-715187                                            | ingress-addon-legacy-715187 | jenkins | v1.32.0 | 18 Dec 23 23:47 UTC | 18 Dec 23 23:47 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-715187                                            | ingress-addon-legacy-715187 | jenkins | v1.32.0 | 18 Dec 23 23:47 UTC | 18 Dec 23 23:47 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 23:42:51
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 23:42:51.736388  845164 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:42:51.736572  845164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:42:51.736581  845164 out.go:309] Setting ErrFile to fd 2...
	I1218 23:42:51.736587  845164 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:42:51.736850  845164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
	I1218 23:42:51.737275  845164 out.go:303] Setting JSON to false
	I1218 23:42:51.738222  845164 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15914,"bootTime":1702927058,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 23:42:51.738297  845164 start.go:138] virtualization:  
	I1218 23:42:51.741332  845164 out.go:177] * [ingress-addon-legacy-715187] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 23:42:51.743865  845164 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 23:42:51.743928  845164 notify.go:220] Checking for updates...
	I1218 23:42:51.747433  845164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:42:51.749738  845164 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:42:51.751477  845164 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	I1218 23:42:51.753205  845164 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 23:42:51.754902  845164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 23:42:51.757105  845164 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 23:42:51.780220  845164 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 23:42:51.780351  845164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:42:51.861677  845164 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-18 23:42:51.852154779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:42:51.861776  845164 docker.go:295] overlay module found
	I1218 23:42:51.864037  845164 out.go:177] * Using the docker driver based on user configuration
	I1218 23:42:51.865763  845164 start.go:298] selected driver: docker
	I1218 23:42:51.865783  845164 start.go:902] validating driver "docker" against <nil>
	I1218 23:42:51.865796  845164 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 23:42:51.866429  845164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:42:51.931500  845164 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:36 SystemTime:2023-12-18 23:42:51.922493884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:42:51.931661  845164 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 23:42:51.931974  845164 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 23:42:51.933957  845164 out.go:177] * Using Docker driver with root privileges
	I1218 23:42:51.935749  845164 cni.go:84] Creating CNI manager for ""
	I1218 23:42:51.935771  845164 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1218 23:42:51.935786  845164 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 23:42:51.935800  845164 start_flags.go:323] config:
	{Name:ingress-addon-legacy-715187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-715187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:42:51.937768  845164 out.go:177] * Starting control plane node ingress-addon-legacy-715187 in cluster ingress-addon-legacy-715187
	I1218 23:42:51.939442  845164 cache.go:121] Beginning downloading kic base image for docker with crio
	I1218 23:42:51.941225  845164 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1218 23:42:51.943016  845164 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1218 23:42:51.943113  845164 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 23:42:51.960144  845164 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon, skipping pull
	I1218 23:42:51.960166  845164 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in daemon, skipping load
	I1218 23:42:52.015213  845164 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1218 23:42:52.015239  845164 cache.go:56] Caching tarball of preloaded images
	I1218 23:42:52.015407  845164 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1218 23:42:52.017526  845164 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1218 23:42:52.019760  845164 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1218 23:42:52.141982  845164 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1218 23:43:08.603007  845164 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1218 23:43:08.603121  845164 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1218 23:43:09.793924  845164 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1218 23:43:09.794304  845164 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/config.json ...
	I1218 23:43:09.794339  845164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/config.json: {Name:mk6a79c758bd558ec70c42db25d04678856d6889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:43:09.794526  845164 cache.go:194] Successfully downloaded all kic artifacts
	I1218 23:43:09.794573  845164 start.go:365] acquiring machines lock for ingress-addon-legacy-715187: {Name:mk8b9e58df9f32fe2a91b2287199064187892008 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 23:43:09.794629  845164 start.go:369] acquired machines lock for "ingress-addon-legacy-715187" in 46.334µs
	I1218 23:43:09.794655  845164 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-715187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-715187 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1218 23:43:09.794723  845164 start.go:125] createHost starting for "" (driver="docker")
	I1218 23:43:09.797108  845164 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1218 23:43:09.797340  845164 start.go:159] libmachine.API.Create for "ingress-addon-legacy-715187" (driver="docker")
	I1218 23:43:09.797363  845164 client.go:168] LocalClient.Create starting
	I1218 23:43:09.797428  845164 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem
	I1218 23:43:09.797465  845164 main.go:141] libmachine: Decoding PEM data...
	I1218 23:43:09.797485  845164 main.go:141] libmachine: Parsing certificate...
	I1218 23:43:09.797543  845164 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem
	I1218 23:43:09.797564  845164 main.go:141] libmachine: Decoding PEM data...
	I1218 23:43:09.797577  845164 main.go:141] libmachine: Parsing certificate...
	I1218 23:43:09.797935  845164 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-715187 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 23:43:09.814929  845164 cli_runner.go:211] docker network inspect ingress-addon-legacy-715187 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 23:43:09.815012  845164 network_create.go:281] running [docker network inspect ingress-addon-legacy-715187] to gather additional debugging logs...
	I1218 23:43:09.815032  845164 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-715187
	W1218 23:43:09.832148  845164 cli_runner.go:211] docker network inspect ingress-addon-legacy-715187 returned with exit code 1
	I1218 23:43:09.832182  845164 network_create.go:284] error running [docker network inspect ingress-addon-legacy-715187]: docker network inspect ingress-addon-legacy-715187: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-715187 not found
	I1218 23:43:09.832194  845164 network_create.go:286] output of [docker network inspect ingress-addon-legacy-715187]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-715187 not found
	
	** /stderr **
	I1218 23:43:09.832290  845164 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 23:43:09.850233  845164 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40020523d0}
	I1218 23:43:09.850274  845164 network_create.go:124] attempt to create docker network ingress-addon-legacy-715187 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1218 23:43:09.850336  845164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-715187 ingress-addon-legacy-715187
	I1218 23:43:09.918577  845164 network_create.go:108] docker network ingress-addon-legacy-715187 192.168.49.0/24 created
	I1218 23:43:09.918611  845164 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-715187" container
	I1218 23:43:09.918690  845164 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 23:43:09.934261  845164 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-715187 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-715187 --label created_by.minikube.sigs.k8s.io=true
	I1218 23:43:09.952281  845164 oci.go:103] Successfully created a docker volume ingress-addon-legacy-715187
	I1218 23:43:09.952370  845164 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-715187-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-715187 --entrypoint /usr/bin/test -v ingress-addon-legacy-715187:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1218 23:43:11.429478  845164 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-715187-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-715187 --entrypoint /usr/bin/test -v ingress-addon-legacy-715187:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib: (1.477055169s)
	I1218 23:43:11.429513  845164 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-715187
	I1218 23:43:11.429541  845164 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1218 23:43:11.429562  845164 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 23:43:11.429645  845164 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-715187:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 23:43:16.339997  845164 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-715187:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.9102777s)
	I1218 23:43:16.340028  845164 kic.go:203] duration metric: took 4.910464 seconds to extract preloaded images to volume
	W1218 23:43:16.340176  845164 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 23:43:16.340285  845164 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 23:43:16.406976  845164 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-715187 --name ingress-addon-legacy-715187 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-715187 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-715187 --network ingress-addon-legacy-715187 --ip 192.168.49.2 --volume ingress-addon-legacy-715187:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1218 23:43:16.738848  845164 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-715187 --format={{.State.Running}}
	I1218 23:43:16.763325  845164 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-715187 --format={{.State.Status}}
	I1218 23:43:16.786030  845164 cli_runner.go:164] Run: docker exec ingress-addon-legacy-715187 stat /var/lib/dpkg/alternatives/iptables
	I1218 23:43:16.853791  845164 oci.go:144] the created container "ingress-addon-legacy-715187" has a running status.
	I1218 23:43:16.853816  845164 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/ingress-addon-legacy-715187/id_rsa...
	I1218 23:43:17.306896  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/ingress-addon-legacy-715187/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1218 23:43:17.306966  845164 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17822-812008/.minikube/machines/ingress-addon-legacy-715187/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 23:43:17.348108  845164 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-715187 --format={{.State.Status}}
	I1218 23:43:17.393167  845164 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 23:43:17.393188  845164 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-715187 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 23:43:17.463344  845164 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-715187 --format={{.State.Status}}
	I1218 23:43:17.494665  845164 machine.go:88] provisioning docker machine ...
	I1218 23:43:17.494693  845164 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-715187"
	I1218 23:43:17.494837  845164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-715187
	I1218 23:43:17.521110  845164 main.go:141] libmachine: Using SSH client type: native
	I1218 23:43:17.521529  845164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1218 23:43:17.521542  845164 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-715187 && echo "ingress-addon-legacy-715187" | sudo tee /etc/hostname
	I1218 23:43:17.721535  845164 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-715187
	
	I1218 23:43:17.721610  845164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-715187
	I1218 23:43:17.740901  845164 main.go:141] libmachine: Using SSH client type: native
	I1218 23:43:17.741324  845164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1218 23:43:17.741344  845164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-715187' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-715187/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-715187' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 23:43:17.897024  845164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 23:43:17.897049  845164 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17822-812008/.minikube CaCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17822-812008/.minikube}
	I1218 23:43:17.897077  845164 ubuntu.go:177] setting up certificates
	I1218 23:43:17.897086  845164 provision.go:83] configureAuth start
	I1218 23:43:17.897145  845164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-715187
	I1218 23:43:17.924219  845164 provision.go:138] copyHostCerts
	I1218 23:43:17.924334  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem
	I1218 23:43:17.924368  845164 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem, removing ...
	I1218 23:43:17.924376  845164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem
	I1218 23:43:17.924516  845164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem (1078 bytes)
	I1218 23:43:17.924667  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem
	I1218 23:43:17.924690  845164 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem, removing ...
	I1218 23:43:17.924695  845164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem
	I1218 23:43:17.924766  845164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem (1123 bytes)
	I1218 23:43:17.924862  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem
	I1218 23:43:17.924883  845164 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem, removing ...
	I1218 23:43:17.924923  845164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem
	I1218 23:43:17.924960  845164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem (1679 bytes)
	I1218 23:43:17.925180  845164 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-715187 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-715187]
	I1218 23:43:18.222432  845164 provision.go:172] copyRemoteCerts
	I1218 23:43:18.222513  845164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 23:43:18.222556  845164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-715187
	I1218 23:43:18.240685  845164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/ingress-addon-legacy-715187/id_rsa Username:docker}
	I1218 23:43:18.346751  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 23:43:18.346811  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 23:43:18.376377  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 23:43:18.376493  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1218 23:43:18.405857  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 23:43:18.405970  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 23:43:18.434220  845164 provision.go:86] duration metric: configureAuth took 537.120602ms
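configureAuth above generated a server certificate whose SANs must cover every name the node will be reached by (the san=[...] list logged at 23:43:17.925180). A small sketch, assuming the openssl CLI is available on the Jenkins host, for checking what actually got signed into the copied server.pem:

    # print the SAN list of the freshly generated server certificate
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'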
	I1218 23:43:18.434247  845164 ubuntu.go:193] setting minikube options for container-runtime
	I1218 23:43:18.434448  845164 config.go:182] Loaded profile config "ingress-addon-legacy-715187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1218 23:43:18.434559  845164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-715187
	I1218 23:43:18.453148  845164 main.go:141] libmachine: Using SSH client type: native
	I1218 23:43:18.453570  845164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1218 23:43:18.453591  845164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1218 23:43:18.749543  845164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1218 23:43:18.749564  845164 machine.go:91] provisioned docker machine in 1.254879454s
	I1218 23:43:18.749574  845164 client.go:171] LocalClient.Create took 8.952205588s
	I1218 23:43:18.749587  845164 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-715187" took 8.952246433s
	I1218 23:43:18.749595  845164 start.go:300] post-start starting for "ingress-addon-legacy-715187" (driver="docker")
	I1218 23:43:18.749604  845164 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 23:43:18.749666  845164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 23:43:18.749710  845164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-715187
	I1218 23:43:18.768441  845164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/ingress-addon-legacy-715187/id_rsa Username:docker}
	I1218 23:43:18.875333  845164 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 23:43:18.879551  845164 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 23:43:18.879588  845164 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1218 23:43:18.879600  845164 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1218 23:43:18.879608  845164 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1218 23:43:18.879622  845164 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/addons for local assets ...
	I1218 23:43:18.879689  845164 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/files for local assets ...
	I1218 23:43:18.879792  845164 filesync.go:149] local asset: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem -> 8173782.pem in /etc/ssl/certs
	I1218 23:43:18.879804  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem -> /etc/ssl/certs/8173782.pem
	I1218 23:43:18.879922  845164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 23:43:18.890552  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem --> /etc/ssl/certs/8173782.pem (1708 bytes)
	I1218 23:43:18.918713  845164 start.go:303] post-start completed in 169.102524ms
	I1218 23:43:18.919084  845164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-715187
	I1218 23:43:18.936349  845164 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/config.json ...
	I1218 23:43:18.936662  845164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 23:43:18.936709  845164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-715187
	I1218 23:43:18.954173  845164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/ingress-addon-legacy-715187/id_rsa Username:docker}
	I1218 23:43:19.058058  845164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 23:43:19.063810  845164 start.go:128] duration metric: createHost completed in 9.269071181s
	I1218 23:43:19.063837  845164 start.go:83] releasing machines lock for "ingress-addon-legacy-715187", held for 9.269192797s
	I1218 23:43:19.063906  845164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-715187
	I1218 23:43:19.081105  845164 ssh_runner.go:195] Run: cat /version.json
	I1218 23:43:19.081117  845164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 23:43:19.081161  845164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-715187
	I1218 23:43:19.081189  845164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-715187
	I1218 23:43:19.106679  845164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/ingress-addon-legacy-715187/id_rsa Username:docker}
	I1218 23:43:19.106922  845164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/ingress-addon-legacy-715187/id_rsa Username:docker}
	I1218 23:43:19.338750  845164 ssh_runner.go:195] Run: systemctl --version
	I1218 23:43:19.344135  845164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1218 23:43:19.490789  845164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 23:43:19.496280  845164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 23:43:19.519853  845164 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1218 23:43:19.519960  845164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 23:43:19.555438  845164 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
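The two find/mv runs above simply rename any loopback, bridge, or podman CNI configs so cri-o will not pick them up before kindnet is installed. A quick way to see what was moved aside (a sketch; run inside the node, e.g. via minikube ssh):

    # configs minikube renamed out of the way get a .mk_disabled suffix
    ls -l /etc/cni/net.d/*.mk_disabled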
	I1218 23:43:19.555466  845164 start.go:475] detecting cgroup driver to use...
	I1218 23:43:19.555499  845164 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1218 23:43:19.555548  845164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 23:43:19.573331  845164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 23:43:19.587068  845164 docker.go:203] disabling cri-docker service (if available) ...
	I1218 23:43:19.587168  845164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 23:43:19.607023  845164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 23:43:19.623895  845164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 23:43:19.725232  845164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 23:43:19.831017  845164 docker.go:219] disabling docker service ...
	I1218 23:43:19.831137  845164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 23:43:19.852166  845164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 23:43:19.866263  845164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 23:43:19.967146  845164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 23:43:20.094059  845164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 23:43:20.110866  845164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 23:43:20.134304  845164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1218 23:43:20.134461  845164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1218 23:43:20.147751  845164 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1218 23:43:20.147860  845164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1218 23:43:20.160406  845164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1218 23:43:20.173273  845164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1218 23:43:20.186170  845164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 23:43:20.197684  845164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 23:43:20.208393  845164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 23:43:20.218952  845164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 23:43:20.317082  845164 ssh_runner.go:195] Run: sudo systemctl restart crio
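The sed edits above land in cri-o's drop-in config before the restart. A sketch of what that drop-in should contain afterwards, with the values this run configured (pause image 3.2 for Kubernetes v1.18, cgroupfs to match the detected host cgroup driver):

    # inspect the drop-in the sed commands edited (inside the node)
    $ grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"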
	I1218 23:43:20.439847  845164 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1218 23:43:20.440057  845164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1218 23:43:20.445218  845164 start.go:543] Will wait 60s for crictl version
	I1218 23:43:20.445288  845164 ssh_runner.go:195] Run: which crictl
	I1218 23:43:20.449808  845164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1218 23:43:20.499156  845164 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1218 23:43:20.499324  845164 ssh_runner.go:195] Run: crio --version
	I1218 23:43:20.541167  845164 ssh_runner.go:195] Run: crio --version
	I1218 23:43:20.588205  845164 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1218 23:43:20.590015  845164 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-715187 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 23:43:20.610024  845164 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 23:43:20.614650  845164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
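The bash one-liner above rewrites /etc/hosts atomically so the node can always reach the host machine by a stable name. What it leaves behind (a sketch, checked from inside the node):

    # host.minikube.internal resolves to the docker network gateway
    $ grep host.minikube.internal /etc/hosts
    192.168.49.1	host.minikube.internal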
	I1218 23:43:20.627693  845164 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1218 23:43:20.627768  845164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 23:43:20.677149  845164 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1218 23:43:20.677224  845164 ssh_runner.go:195] Run: which lz4
	I1218 23:43:20.681619  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1218 23:43:20.681717  845164 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1218 23:43:20.685856  845164 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1218 23:43:20.685894  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1218 23:43:23.053965  845164 crio.go:444] Took 2.372260 seconds to copy over tarball
	I1218 23:43:23.054051  845164 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1218 23:43:25.735228  845164 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.681145986s)
	I1218 23:43:25.735254  845164 crio.go:451] Took 2.681261 seconds to extract the tarball
	I1218 23:43:25.735264  845164 ssh_runner.go:146] rm: /preloaded.tar.lz4
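With the tarball extracted into /var and deleted, the runtime image store is listed again to decide whether anything still has to be pulled or loaded from cache. The equivalent manual check (a sketch, run inside the node):

    # see which images the preload actually delivered to cri-o
    sudo crictl images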
	I1218 23:43:25.903745  845164 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 23:43:25.945422  845164 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1218 23:43:25.945447  845164 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1218 23:43:25.945514  845164 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1218 23:43:25.945735  845164 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1218 23:43:25.945815  845164 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1218 23:43:25.945887  845164 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1218 23:43:25.946063  845164 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:43:25.946148  845164 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1218 23:43:25.946209  845164 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1218 23:43:25.946280  845164 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1218 23:43:25.947074  845164 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1218 23:43:25.947522  845164 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1218 23:43:25.947983  845164 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1218 23:43:25.948087  845164 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:43:25.948300  845164 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1218 23:43:25.949075  845164 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1218 23:43:25.949335  845164 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1218 23:43:25.949373  845164 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	W1218 23:43:26.304650  845164 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1218 23:43:26.304935  845164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1218 23:43:26.316785  845164 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1218 23:43:26.317150  845164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1218 23:43:26.321252  845164 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1218 23:43:26.321503  845164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1218 23:43:26.324728  845164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1218 23:43:26.338678  845164 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1218 23:43:26.338880  845164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W1218 23:43:26.343884  845164 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1218 23:43:26.344132  845164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W1218 23:43:26.375910  845164 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1218 23:43:26.376148  845164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1218 23:43:26.442743  845164 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1218 23:43:26.442810  845164 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1218 23:43:26.442872  845164 ssh_runner.go:195] Run: which crictl
	I1218 23:43:26.501258  845164 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1218 23:43:26.501316  845164 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1218 23:43:26.501369  845164 ssh_runner.go:195] Run: which crictl
	W1218 23:43:26.507292  845164 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1218 23:43:26.507464  845164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:43:26.544864  845164 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1218 23:43:26.544950  845164 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1218 23:43:26.545048  845164 ssh_runner.go:195] Run: which crictl
	I1218 23:43:26.545166  845164 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1218 23:43:26.545200  845164 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1218 23:43:26.545239  845164 ssh_runner.go:195] Run: which crictl
	I1218 23:43:26.567670  845164 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1218 23:43:26.567747  845164 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1218 23:43:26.567830  845164 ssh_runner.go:195] Run: which crictl
	I1218 23:43:26.567965  845164 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1218 23:43:26.568002  845164 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1218 23:43:26.568039  845164 ssh_runner.go:195] Run: which crictl
	I1218 23:43:26.568133  845164 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1218 23:43:26.568170  845164 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1218 23:43:26.568213  845164 ssh_runner.go:195] Run: which crictl
	I1218 23:43:26.568313  845164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1218 23:43:26.568393  845164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1218 23:43:26.715783  845164 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1218 23:43:26.715869  845164 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:43:26.715905  845164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1218 23:43:26.715966  845164 ssh_runner.go:195] Run: which crictl
	I1218 23:43:26.716013  845164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1218 23:43:26.716139  845164 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1218 23:43:26.716199  845164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1218 23:43:26.716239  845164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1218 23:43:26.716212  845164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1218 23:43:26.716301  845164 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1218 23:43:26.856575  845164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:43:26.856655  845164 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1218 23:43:26.856722  845164 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1218 23:43:26.856761  845164 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1218 23:43:26.856802  845164 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1218 23:43:26.856848  845164 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1218 23:43:26.910597  845164 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1218 23:43:26.910684  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 -> /var/lib/minikube/images/storage-provisioner_v5
	I1218 23:43:26.910791  845164 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1218 23:43:26.915067  845164 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1218 23:43:26.915144  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1218 23:43:26.993872  845164 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1218 23:43:26.994014  845164 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1218 23:43:27.546184  845164 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1218 23:43:27.546240  845164 cache_images.go:92] LoadImages completed in 1.600780398s
	W1218 23:43:27.546320  845164 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
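This is the point where the missing per-architecture cache files bite: LoadImages falls back to the on-disk cache under .minikube/cache/images/arm64/, finds only the storage-provisioner tarball, and gives up on the Kubernetes images. A sketch of how that cache is normally populated ahead of time (standard `minikube cache` usage; image names taken from the list logged above):

    # pre-populate the local image cache that LoadImages reads from
    minikube cache add registry.k8s.io/kube-controller-manager:v1.18.20
    minikube cache add registry.k8s.io/kube-apiserver:v1.18.20
    ls ~/.minikube/cache/images/arm64/registry.k8s.io/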
	I1218 23:43:27.546402  845164 ssh_runner.go:195] Run: crio config
	I1218 23:43:27.603306  845164 cni.go:84] Creating CNI manager for ""
	I1218 23:43:27.603367  845164 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1218 23:43:27.603412  845164 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 23:43:27.603471  845164 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-715187 NodeName:ingress-addon-legacy-715187 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1218 23:43:27.603666  845164 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-715187"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
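The generated kubeadm config above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down and later fed to `kubeadm init`. Once the control plane is up, the same ClusterConfiguration can be read back from the cluster (a sketch, assuming kubectl is pointed at this profile):

    # kubeadm stores the applied ClusterConfiguration in a ConfigMap
    kubectl -n kube-system get configmap kubeadm-config -o yaml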
	
	I1218 23:43:27.603784  845164 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-715187 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-715187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
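The kubelet drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 486-byte scp just below). A quick way to confirm systemd sees the merged unit and flags (a sketch, run inside the node):

    # show the kubelet unit plus the 10-kubeadm.conf drop-in minikube wrote
    systemctl cat kubelet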
	I1218 23:43:27.603887  845164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1218 23:43:27.614941  845164 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 23:43:27.615061  845164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 23:43:27.625773  845164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1218 23:43:27.647487  845164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1218 23:43:27.669038  845164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1218 23:43:27.690865  845164 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 23:43:27.695495  845164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 23:43:27.709082  845164 certs.go:56] Setting up /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187 for IP: 192.168.49.2
	I1218 23:43:27.709155  845164 certs.go:190] acquiring lock for shared ca certs: {Name:mkb7306ae237ed30250289faa05e9a8d3ae56985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:43:27.709396  845164 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.key
	I1218 23:43:27.709454  845164 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.key
	I1218 23:43:27.709507  845164 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.key
	I1218 23:43:27.709523  845164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt with IP's: []
	I1218 23:43:28.309604  845164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt ...
	I1218 23:43:28.309636  845164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: {Name:mk82d444a415f5af244e0320b9b71c5af8c5d80e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:43:28.309835  845164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.key ...
	I1218 23:43:28.309850  845164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.key: {Name:mk48e2d99c6e76202f6c6b04edafa861f8ddcc28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:43:28.309947  845164 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/apiserver.key.dd3b5fb2
	I1218 23:43:28.309969  845164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1218 23:43:28.858974  845164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/apiserver.crt.dd3b5fb2 ...
	I1218 23:43:28.859009  845164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/apiserver.crt.dd3b5fb2: {Name:mk0372840ad723fd19b70df056dea754b0f9d871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:43:28.859182  845164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/apiserver.key.dd3b5fb2 ...
	I1218 23:43:28.859194  845164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/apiserver.key.dd3b5fb2: {Name:mkb4148b013b6aaf1465ed00aeb50d52498caaa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:43:28.859273  845164 certs.go:337] copying /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/apiserver.crt
	I1218 23:43:28.859345  845164 certs.go:341] copying /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/apiserver.key
	I1218 23:43:28.859395  845164 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/proxy-client.key
	I1218 23:43:28.859410  845164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/proxy-client.crt with IP's: []
	I1218 23:43:29.639785  845164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/proxy-client.crt ...
	I1218 23:43:29.639820  845164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/proxy-client.crt: {Name:mke9cf0044af35f5f50063e7a4fcb184223d9253 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:43:29.640018  845164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/proxy-client.key ...
	I1218 23:43:29.640034  845164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/proxy-client.key: {Name:mk2d64c288475879fae77cef7c9b8248f6f425ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:43:29.640117  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1218 23:43:29.640137  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1218 23:43:29.640153  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1218 23:43:29.640170  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1218 23:43:29.640189  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1218 23:43:29.640206  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1218 23:43:29.640225  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1218 23:43:29.640236  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1218 23:43:29.640292  845164 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/817378.pem (1338 bytes)
	W1218 23:43:29.640334  845164 certs.go:433] ignoring /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/817378_empty.pem, impossibly tiny 0 bytes
	I1218 23:43:29.640351  845164 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 23:43:29.640377  845164 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem (1078 bytes)
	I1218 23:43:29.640409  845164 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem (1123 bytes)
	I1218 23:43:29.640442  845164 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem (1679 bytes)
	I1218 23:43:29.640491  845164 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem (1708 bytes)
	I1218 23:43:29.640527  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:43:29.640542  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/817378.pem -> /usr/share/ca-certificates/817378.pem
	I1218 23:43:29.640555  845164 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem -> /usr/share/ca-certificates/8173782.pem
	I1218 23:43:29.641159  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1218 23:43:29.671014  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1218 23:43:29.700667  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 23:43:29.730164  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 23:43:29.759203  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 23:43:29.788637  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 23:43:29.817168  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 23:43:29.846061  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1218 23:43:29.874984  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 23:43:29.904239  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/certs/817378.pem --> /usr/share/ca-certificates/817378.pem (1338 bytes)
	I1218 23:43:29.933205  845164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem --> /usr/share/ca-certificates/8173782.pem (1708 bytes)
	I1218 23:43:29.962276  845164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 23:43:29.983802  845164 ssh_runner.go:195] Run: openssl version
	I1218 23:43:29.991097  845164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 23:43:30.022504  845164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:43:30.043782  845164 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 23:32 /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:43:30.043943  845164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:43:30.054877  845164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1218 23:43:30.070227  845164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/817378.pem && ln -fs /usr/share/ca-certificates/817378.pem /etc/ssl/certs/817378.pem"
	I1218 23:43:30.084225  845164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/817378.pem
	I1218 23:43:30.089765  845164 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 18 23:39 /usr/share/ca-certificates/817378.pem
	I1218 23:43:30.089846  845164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/817378.pem
	I1218 23:43:30.099347  845164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/817378.pem /etc/ssl/certs/51391683.0"
	I1218 23:43:30.113786  845164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8173782.pem && ln -fs /usr/share/ca-certificates/8173782.pem /etc/ssl/certs/8173782.pem"
	I1218 23:43:30.128925  845164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8173782.pem
	I1218 23:43:30.134418  845164 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 18 23:39 /usr/share/ca-certificates/8173782.pem
	I1218 23:43:30.134492  845164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8173782.pem
	I1218 23:43:30.144878  845164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8173782.pem /etc/ssl/certs/3ec20f2e.0"
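The three `ln -fs ... /etc/ssl/certs/<hash>.0` commands above follow the standard OpenSSL c_rehash convention: the link name is the subject-name hash of the certificate, which is exactly what the `openssl x509 -hash` runs compute. A sketch of reproducing one of the logged hashes by hand:

    # prints b5213941 for minikubeCA.pem, matching the b5213941.0 symlink created above
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem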
	I1218 23:43:30.158555  845164 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 23:43:30.163865  845164 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 23:43:30.163979  845164 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-715187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-715187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:43:30.164091  845164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1218 23:43:30.164160  845164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 23:43:30.210160  845164 cri.go:89] found id: ""
	I1218 23:43:30.210247  845164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 23:43:30.222502  845164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 23:43:30.234105  845164 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1218 23:43:30.234174  845164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 23:43:30.245951  845164 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 23:43:30.245999  845164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 23:43:30.302596  845164 kubeadm.go:322] W1218 23:43:30.302070    1256 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1218 23:43:30.355591  845164 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1218 23:43:30.440652  845164 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 23:43:36.964614  845164 kubeadm.go:322] W1218 23:43:36.964308    1256 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1218 23:43:36.971438  845164 kubeadm.go:322] W1218 23:43:36.971164    1256 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1218 23:43:49.964255  845164 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1218 23:43:49.964313  845164 kubeadm.go:322] [preflight] Running pre-flight checks
	I1218 23:43:49.964395  845164 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1218 23:43:49.964456  845164 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1218 23:43:49.964499  845164 kubeadm.go:322] OS: Linux
	I1218 23:43:49.964542  845164 kubeadm.go:322] CGROUPS_CPU: enabled
	I1218 23:43:49.964587  845164 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1218 23:43:49.964631  845164 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1218 23:43:49.964676  845164 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1218 23:43:49.964721  845164 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1218 23:43:49.964765  845164 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1218 23:43:49.964833  845164 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 23:43:49.964921  845164 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 23:43:49.965007  845164 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1218 23:43:49.965102  845164 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 23:43:49.965181  845164 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 23:43:49.965217  845164 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1218 23:43:49.965277  845164 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 23:43:49.967294  845164 out.go:204]   - Generating certificates and keys ...
	I1218 23:43:49.967373  845164 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1218 23:43:49.967434  845164 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1218 23:43:49.967496  845164 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 23:43:49.967554  845164 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1218 23:43:49.967610  845164 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1218 23:43:49.967656  845164 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1218 23:43:49.967706  845164 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1218 23:43:49.967834  845164 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-715187 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 23:43:49.967883  845164 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1218 23:43:49.968095  845164 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-715187 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 23:43:49.968185  845164 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 23:43:49.968263  845164 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 23:43:49.968312  845164 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1218 23:43:49.968372  845164 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 23:43:49.968440  845164 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 23:43:49.968497  845164 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 23:43:49.968564  845164 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 23:43:49.968621  845164 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 23:43:49.968693  845164 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 23:43:49.970760  845164 out.go:204]   - Booting up control plane ...
	I1218 23:43:49.970855  845164 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 23:43:49.970933  845164 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 23:43:49.970994  845164 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 23:43:49.971070  845164 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 23:43:49.971211  845164 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1218 23:43:49.971283  845164 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.502988 seconds
	I1218 23:43:49.971380  845164 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 23:43:49.971500  845164 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 23:43:49.971553  845164 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1218 23:43:49.971678  845164 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-715187 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1218 23:43:49.971730  845164 kubeadm.go:322] [bootstrap-token] Using token: uqyjjc.rqwl9dqp5qyayta3
	I1218 23:43:49.973389  845164 out.go:204]   - Configuring RBAC rules ...
	I1218 23:43:49.973508  845164 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 23:43:49.973593  845164 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 23:43:49.973726  845164 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 23:43:49.973848  845164 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 23:43:49.973958  845164 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 23:43:49.974040  845164 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 23:43:49.974151  845164 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 23:43:49.974206  845164 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1218 23:43:49.974252  845164 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1218 23:43:49.974260  845164 kubeadm.go:322] 
	I1218 23:43:49.974315  845164 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1218 23:43:49.974323  845164 kubeadm.go:322] 
	I1218 23:43:49.974394  845164 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1218 23:43:49.974402  845164 kubeadm.go:322] 
	I1218 23:43:49.974426  845164 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1218 23:43:49.974483  845164 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 23:43:49.974534  845164 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 23:43:49.974542  845164 kubeadm.go:322] 
	I1218 23:43:49.974593  845164 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1218 23:43:49.974666  845164 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 23:43:49.974733  845164 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 23:43:49.974741  845164 kubeadm.go:322] 
	I1218 23:43:49.974819  845164 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1218 23:43:49.974893  845164 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1218 23:43:49.974901  845164 kubeadm.go:322] 
	I1218 23:43:49.974979  845164 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token uqyjjc.rqwl9dqp5qyayta3 \
	I1218 23:43:49.975081  845164 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c459ee434ab47b92cbaa79d344c36da497111b64dc66fca5cb8785b7cbb4349c \
	I1218 23:43:49.975107  845164 kubeadm.go:322]     --control-plane 
	I1218 23:43:49.975114  845164 kubeadm.go:322] 
	I1218 23:43:49.975193  845164 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1218 23:43:49.975245  845164 kubeadm.go:322] 
	I1218 23:43:49.975348  845164 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token uqyjjc.rqwl9dqp5qyayta3 \
	I1218 23:43:49.975503  845164 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c459ee434ab47b92cbaa79d344c36da497111b64dc66fca5cb8785b7cbb4349c 
	I1218 23:43:49.975528  845164 cni.go:84] Creating CNI manager for ""
	I1218 23:43:49.975548  845164 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1218 23:43:49.978812  845164 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1218 23:43:49.980698  845164 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1218 23:43:49.985889  845164 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1218 23:43:49.985919  845164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1218 23:43:50.020160  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1218 23:43:50.527212  845164 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 23:43:50.527301  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:50.527333  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2 minikube.k8s.io/name=ingress-addon-legacy-715187 minikube.k8s.io/updated_at=2023_12_18T23_43_50_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:50.697688  845164 ops.go:34] apiserver oom_adj: -16
	I1218 23:43:50.697807  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:51.198499  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:51.698606  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:52.198800  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:52.698250  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:53.198641  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:53.697943  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:54.198623  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:54.698115  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:55.197955  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:55.698453  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:56.198716  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:56.698800  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:57.197892  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:57.698775  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:58.198248  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:58.697961  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:59.198495  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:43:59.698902  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:44:00.198220  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:44:00.698679  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:44:01.198012  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:44:01.698619  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:44:02.198573  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:44:02.698680  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:44:03.198390  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:44:03.698694  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:44:04.198673  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:44:04.698332  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:44:05.198593  845164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:44:05.320095  845164 kubeadm.go:1088] duration metric: took 14.79286218s to wait for elevateKubeSystemPrivileges.
	I1218 23:44:05.320132  845164 kubeadm.go:406] StartCluster complete in 35.156199188s
	I1218 23:44:05.320150  845164 settings.go:142] acquiring lock: {Name:mkb4ce0a07455c74d828d76d071a3ad023516aeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:44:05.320215  845164 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:44:05.320963  845164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/kubeconfig: {Name:mk19de5f3e7863c913095f8f2b91ab4519f12535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:44:05.321694  845164 kapi.go:59] client config for ingress-addon-legacy-715187: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt", KeyFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.key", CAFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf310), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 23:44:05.322970  845164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 23:44:05.323238  845164 config.go:182] Loaded profile config "ingress-addon-legacy-715187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1218 23:44:05.323277  845164 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1218 23:44:05.323348  845164 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-715187"
	I1218 23:44:05.323362  845164 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-715187"
	I1218 23:44:05.323400  845164 host.go:66] Checking if "ingress-addon-legacy-715187" exists ...
	I1218 23:44:05.323901  845164 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-715187 --format={{.State.Status}}
	I1218 23:44:05.324386  845164 cert_rotation.go:137] Starting client certificate rotation controller
	I1218 23:44:05.324421  845164 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-715187"
	I1218 23:44:05.324434  845164 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-715187"
	I1218 23:44:05.326125  845164 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-715187 --format={{.State.Status}}
	I1218 23:44:05.366254  845164 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:44:05.369497  845164 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 23:44:05.369520  845164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 23:44:05.369585  845164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-715187
	I1218 23:44:05.399978  845164 kapi.go:59] client config for ingress-addon-legacy-715187: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt", KeyFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.key", CAFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf310), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 23:44:05.400251  845164 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-715187"
	I1218 23:44:05.400294  845164 host.go:66] Checking if "ingress-addon-legacy-715187" exists ...
	I1218 23:44:05.400787  845164 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-715187 --format={{.State.Status}}
	I1218 23:44:05.424711  845164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/ingress-addon-legacy-715187/id_rsa Username:docker}
	I1218 23:44:05.452165  845164 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 23:44:05.452189  845164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 23:44:05.452263  845164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-715187
	I1218 23:44:05.492278  845164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/ingress-addon-legacy-715187/id_rsa Username:docker}
	I1218 23:44:05.643353  845164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1218 23:44:05.661016  845164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 23:44:05.706170  845164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 23:44:05.826083  845164 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-715187" context rescaled to 1 replicas
	I1218 23:44:05.826175  845164 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1218 23:44:05.828233  845164 out.go:177] * Verifying Kubernetes components...
	I1218 23:44:05.829864  845164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:44:06.082048  845164 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1218 23:44:06.407740  845164 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1218 23:44:06.406391  845164 kapi.go:59] client config for ingress-addon-legacy-715187: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt", KeyFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.key", CAFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf310), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 23:44:06.409636  845164 addons.go:502] enable addons completed in 1.086352101s: enabled=[default-storageclass storage-provisioner]
	I1218 23:44:06.408187  845164 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-715187" to be "Ready" ...
	I1218 23:44:08.412477  845164 node_ready.go:58] node "ingress-addon-legacy-715187" has status "Ready":"False"
	I1218 23:44:10.412616  845164 node_ready.go:58] node "ingress-addon-legacy-715187" has status "Ready":"False"
	I1218 23:44:12.913214  845164 node_ready.go:58] node "ingress-addon-legacy-715187" has status "Ready":"False"
	I1218 23:44:13.421350  845164 node_ready.go:49] node "ingress-addon-legacy-715187" has status "Ready":"True"
	I1218 23:44:13.421423  845164 node_ready.go:38] duration metric: took 7.011755894s waiting for node "ingress-addon-legacy-715187" to be "Ready" ...
	I1218 23:44:13.421449  845164 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 23:44:13.445711  845164 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-hpq6c" in "kube-system" namespace to be "Ready" ...
	I1218 23:44:15.449460  845164 pod_ready.go:102] pod "coredns-66bff467f8-hpq6c" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-18 23:44:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1218 23:44:17.948937  845164 pod_ready.go:102] pod "coredns-66bff467f8-hpq6c" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-18 23:44:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1218 23:44:20.448994  845164 pod_ready.go:102] pod "coredns-66bff467f8-hpq6c" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-12-18 23:44:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1218 23:44:22.451127  845164 pod_ready.go:102] pod "coredns-66bff467f8-hpq6c" in "kube-system" namespace has status "Ready":"False"
	I1218 23:44:24.951452  845164 pod_ready.go:102] pod "coredns-66bff467f8-hpq6c" in "kube-system" namespace has status "Ready":"False"
	I1218 23:44:26.952381  845164 pod_ready.go:102] pod "coredns-66bff467f8-hpq6c" in "kube-system" namespace has status "Ready":"False"
	I1218 23:44:27.455226  845164 pod_ready.go:92] pod "coredns-66bff467f8-hpq6c" in "kube-system" namespace has status "Ready":"True"
	I1218 23:44:27.455253  845164 pod_ready.go:81] duration metric: took 14.009461519s waiting for pod "coredns-66bff467f8-hpq6c" in "kube-system" namespace to be "Ready" ...
	I1218 23:44:27.455265  845164 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-715187" in "kube-system" namespace to be "Ready" ...
	I1218 23:44:27.465528  845164 pod_ready.go:92] pod "etcd-ingress-addon-legacy-715187" in "kube-system" namespace has status "Ready":"True"
	I1218 23:44:27.465557  845164 pod_ready.go:81] duration metric: took 10.284718ms waiting for pod "etcd-ingress-addon-legacy-715187" in "kube-system" namespace to be "Ready" ...
	I1218 23:44:27.465572  845164 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-715187" in "kube-system" namespace to be "Ready" ...
	I1218 23:44:27.475103  845164 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-715187" in "kube-system" namespace has status "Ready":"True"
	I1218 23:44:27.475132  845164 pod_ready.go:81] duration metric: took 9.55194ms waiting for pod "kube-apiserver-ingress-addon-legacy-715187" in "kube-system" namespace to be "Ready" ...
	I1218 23:44:27.475150  845164 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-715187" in "kube-system" namespace to be "Ready" ...
	I1218 23:44:27.483896  845164 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-715187" in "kube-system" namespace has status "Ready":"True"
	I1218 23:44:27.483918  845164 pod_ready.go:81] duration metric: took 8.760037ms waiting for pod "kube-controller-manager-ingress-addon-legacy-715187" in "kube-system" namespace to be "Ready" ...
	I1218 23:44:27.483929  845164 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2zdrz" in "kube-system" namespace to be "Ready" ...
	I1218 23:44:27.489881  845164 pod_ready.go:92] pod "kube-proxy-2zdrz" in "kube-system" namespace has status "Ready":"True"
	I1218 23:44:27.489907  845164 pod_ready.go:81] duration metric: took 5.963221ms waiting for pod "kube-proxy-2zdrz" in "kube-system" namespace to be "Ready" ...
	I1218 23:44:27.489926  845164 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-715187" in "kube-system" namespace to be "Ready" ...
	I1218 23:44:27.646224  845164 request.go:629] Waited for 156.174891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-715187
	I1218 23:44:27.846739  845164 request.go:629] Waited for 197.950529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-715187
	I1218 23:44:27.849561  845164 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-715187" in "kube-system" namespace has status "Ready":"True"
	I1218 23:44:27.849586  845164 pod_ready.go:81] duration metric: took 359.651238ms waiting for pod "kube-scheduler-ingress-addon-legacy-715187" in "kube-system" namespace to be "Ready" ...
	I1218 23:44:27.849621  845164 pod_ready.go:38] duration metric: took 14.428119504s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 23:44:27.849645  845164 api_server.go:52] waiting for apiserver process to appear ...
	I1218 23:44:27.849744  845164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 23:44:27.862697  845164 api_server.go:72] duration metric: took 22.036465074s to wait for apiserver process to appear ...
	I1218 23:44:27.862722  845164 api_server.go:88] waiting for apiserver healthz status ...
	I1218 23:44:27.862745  845164 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1218 23:44:27.871553  845164 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1218 23:44:27.872482  845164 api_server.go:141] control plane version: v1.18.20
	I1218 23:44:27.872508  845164 api_server.go:131] duration metric: took 9.77986ms to wait for apiserver health ...
	I1218 23:44:27.872517  845164 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 23:44:28.046914  845164 request.go:629] Waited for 174.334189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1218 23:44:28.052833  845164 system_pods.go:59] 8 kube-system pods found
	I1218 23:44:28.052868  845164 system_pods.go:61] "coredns-66bff467f8-hpq6c" [ddfe3412-5834-4f17-9740-2ec68f99bb6f] Running
	I1218 23:44:28.052875  845164 system_pods.go:61] "etcd-ingress-addon-legacy-715187" [98b966e7-abfc-4317-aa60-fe68df46cb6a] Running
	I1218 23:44:28.052881  845164 system_pods.go:61] "kindnet-95r92" [5429bbdd-e983-460d-870a-cc9b0ead2cc4] Running
	I1218 23:44:28.052887  845164 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-715187" [9e039256-0b27-428c-876e-99d240fa9a84] Running
	I1218 23:44:28.052900  845164 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-715187" [64a63a39-79bf-4e0c-8dc2-e84d1662d30f] Running
	I1218 23:44:28.052911  845164 system_pods.go:61] "kube-proxy-2zdrz" [e94ab071-77e8-4b7c-82dd-2238f261d72b] Running
	I1218 23:44:28.052917  845164 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-715187" [0a66e624-a898-4897-bf44-762d160daad4] Running
	I1218 23:44:28.052924  845164 system_pods.go:61] "storage-provisioner" [5edbddd9-7b00-4aca-9084-b952973b2998] Running
	I1218 23:44:28.052930  845164 system_pods.go:74] duration metric: took 180.408046ms to wait for pod list to return data ...
	I1218 23:44:28.052942  845164 default_sa.go:34] waiting for default service account to be created ...
	I1218 23:44:28.246194  845164 request.go:629] Waited for 193.172487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1218 23:44:28.248585  845164 default_sa.go:45] found service account: "default"
	I1218 23:44:28.248615  845164 default_sa.go:55] duration metric: took 195.664625ms for default service account to be created ...
	I1218 23:44:28.248626  845164 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 23:44:28.447128  845164 request.go:629] Waited for 198.440043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1218 23:44:28.452965  845164 system_pods.go:86] 8 kube-system pods found
	I1218 23:44:28.452997  845164 system_pods.go:89] "coredns-66bff467f8-hpq6c" [ddfe3412-5834-4f17-9740-2ec68f99bb6f] Running
	I1218 23:44:28.453004  845164 system_pods.go:89] "etcd-ingress-addon-legacy-715187" [98b966e7-abfc-4317-aa60-fe68df46cb6a] Running
	I1218 23:44:28.453010  845164 system_pods.go:89] "kindnet-95r92" [5429bbdd-e983-460d-870a-cc9b0ead2cc4] Running
	I1218 23:44:28.453015  845164 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-715187" [9e039256-0b27-428c-876e-99d240fa9a84] Running
	I1218 23:44:28.453021  845164 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-715187" [64a63a39-79bf-4e0c-8dc2-e84d1662d30f] Running
	I1218 23:44:28.453026  845164 system_pods.go:89] "kube-proxy-2zdrz" [e94ab071-77e8-4b7c-82dd-2238f261d72b] Running
	I1218 23:44:28.453051  845164 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-715187" [0a66e624-a898-4897-bf44-762d160daad4] Running
	I1218 23:44:28.453063  845164 system_pods.go:89] "storage-provisioner" [5edbddd9-7b00-4aca-9084-b952973b2998] Running
	I1218 23:44:28.453075  845164 system_pods.go:126] duration metric: took 204.43761ms to wait for k8s-apps to be running ...
	I1218 23:44:28.453082  845164 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 23:44:28.453182  845164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:44:28.466802  845164 system_svc.go:56] duration metric: took 13.706947ms WaitForService to wait for kubelet.
	I1218 23:44:28.466880  845164 kubeadm.go:581] duration metric: took 22.640655195s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 23:44:28.466904  845164 node_conditions.go:102] verifying NodePressure condition ...
	I1218 23:44:28.646167  845164 request.go:629] Waited for 179.178732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1218 23:44:28.649068  845164 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1218 23:44:28.649103  845164 node_conditions.go:123] node cpu capacity is 2
	I1218 23:44:28.649114  845164 node_conditions.go:105] duration metric: took 182.204584ms to run NodePressure ...
	I1218 23:44:28.649129  845164 start.go:228] waiting for startup goroutines ...
	I1218 23:44:28.649136  845164 start.go:233] waiting for cluster config update ...
	I1218 23:44:28.649152  845164 start.go:242] writing updated cluster config ...
	I1218 23:44:28.649426  845164 ssh_runner.go:195] Run: rm -f paused
	I1218 23:44:28.709174  845164 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I1218 23:44:28.711362  845164 out.go:177] 
	W1218 23:44:28.713107  845164 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I1218 23:44:28.714629  845164 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1218 23:44:28.716367  845164 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-715187" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Dec 18 23:47:29 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:29.361906385Z" level=info msg="Removing container: 00eeae2509793da6e8d9aacdf2ab77d697983f0c728f8c0d49c6ba90deefa309" id=ce5b31b2-9daf-4629-bb7d-dd281b6d4182 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Dec 18 23:47:29 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:29.377725330Z" level=info msg="Stopping pod sandbox: 740175035e898ab3e17d232e20d0176133c73879a1f2aafbe3f6c067e4e9228e" id=2fa79b45-43c5-4729-b4a1-cfd3bdd72dff name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 18 23:47:29 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:29.383034797Z" level=info msg="Removed container 00eeae2509793da6e8d9aacdf2ab77d697983f0c728f8c0d49c6ba90deefa309: default/hello-world-app-5f5d8b66bb-c4sjz/hello-world-app" id=ce5b31b2-9daf-4629-bb7d-dd281b6d4182 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Dec 18 23:47:29 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:29.383908955Z" level=info msg="Stopped pod sandbox: 740175035e898ab3e17d232e20d0176133c73879a1f2aafbe3f6c067e4e9228e" id=2fa79b45-43c5-4729-b4a1-cfd3bdd72dff name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 18 23:47:30 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:30.375375781Z" level=info msg="Stopping pod sandbox: 740175035e898ab3e17d232e20d0176133c73879a1f2aafbe3f6c067e4e9228e" id=61e350b5-07be-4f71-b4c5-e889469d5fab name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 18 23:47:30 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:30.375421442Z" level=info msg="Stopped pod sandbox (already stopped): 740175035e898ab3e17d232e20d0176133c73879a1f2aafbe3f6c067e4e9228e" id=61e350b5-07be-4f71-b4c5-e889469d5fab name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 18 23:47:31 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:31.296696528Z" level=info msg="Stopping container: c6cba7a6a61aa56301aeeaffad15b38ed6fc3fbd2a746d6ae4a8df8a2455df00 (timeout: 2s)" id=ee0536ff-8ba3-4513-81c2-257961ff94a6 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 18 23:47:31 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:31.305217921Z" level=info msg="Stopping container: c6cba7a6a61aa56301aeeaffad15b38ed6fc3fbd2a746d6ae4a8df8a2455df00 (timeout: 2s)" id=9dcb0b3e-2a67-4d2b-8b33-fac587647c6b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 18 23:47:31 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:31.374602307Z" level=info msg="Stopping pod sandbox: 740175035e898ab3e17d232e20d0176133c73879a1f2aafbe3f6c067e4e9228e" id=2f6f7f8c-b317-450c-a411-98fa3081faaf name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 18 23:47:31 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:31.374646975Z" level=info msg="Stopped pod sandbox (already stopped): 740175035e898ab3e17d232e20d0176133c73879a1f2aafbe3f6c067e4e9228e" id=2f6f7f8c-b317-450c-a411-98fa3081faaf name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 18 23:47:33 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:33.313569923Z" level=warning msg="Stopping container c6cba7a6a61aa56301aeeaffad15b38ed6fc3fbd2a746d6ae4a8df8a2455df00 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=ee0536ff-8ba3-4513-81c2-257961ff94a6 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 18 23:47:33 ingress-addon-legacy-715187 conmon[2721]: conmon c6cba7a6a61aa56301ae <ninfo>: container 2732 exited with status 137
	Dec 18 23:47:33 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:33.481142467Z" level=info msg="Stopped container c6cba7a6a61aa56301aeeaffad15b38ed6fc3fbd2a746d6ae4a8df8a2455df00: ingress-nginx/ingress-nginx-controller-7fcf777cb7-vpjkq/controller" id=9dcb0b3e-2a67-4d2b-8b33-fac587647c6b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 18 23:47:33 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:33.483285143Z" level=info msg="Stopped container c6cba7a6a61aa56301aeeaffad15b38ed6fc3fbd2a746d6ae4a8df8a2455df00: ingress-nginx/ingress-nginx-controller-7fcf777cb7-vpjkq/controller" id=ee0536ff-8ba3-4513-81c2-257961ff94a6 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Dec 18 23:47:33 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:33.483489105Z" level=info msg="Stopping pod sandbox: bc1e956dd0db1585844fc89880abbecfff200b56561aea77c9bc81c7bf7587ab" id=7048e4c3-64ee-4459-9cec-926e8506415c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 18 23:47:33 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:33.486858995Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-BLLJSNUYVSOUEZAZ - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-QI7UWPTUQR6VZIYF - [0:0]\n-X KUBE-HP-QI7UWPTUQR6VZIYF\n-X KUBE-HP-BLLJSNUYVSOUEZAZ\nCOMMIT\n"
	Dec 18 23:47:33 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:33.495941942Z" level=info msg="Stopping pod sandbox: bc1e956dd0db1585844fc89880abbecfff200b56561aea77c9bc81c7bf7587ab" id=a4b3a0a5-1713-455b-827e-7bc9ebba9faa name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 18 23:47:33 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:33.496755998Z" level=info msg="Closing host port tcp:80"
	Dec 18 23:47:33 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:33.496811030Z" level=info msg="Closing host port tcp:443"
	Dec 18 23:47:33 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:33.498091453Z" level=info msg="Host port tcp:80 does not have an open socket"
	Dec 18 23:47:33 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:33.498114846Z" level=info msg="Host port tcp:443 does not have an open socket"
	Dec 18 23:47:33 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:33.498257138Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-vpjkq Namespace:ingress-nginx ID:bc1e956dd0db1585844fc89880abbecfff200b56561aea77c9bc81c7bf7587ab UID:7044e932-f0f4-4432-96c6-63d266c8faff NetNS:/var/run/netns/d1549ca3-1b90-4ea1-92e6-8264e8901ff1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 18 23:47:33 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:33.498401752Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-vpjkq from CNI network \"kindnet\" (type=ptp)"
	Dec 18 23:47:33 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:33.521466089Z" level=info msg="Stopped pod sandbox: bc1e956dd0db1585844fc89880abbecfff200b56561aea77c9bc81c7bf7587ab" id=7048e4c3-64ee-4459-9cec-926e8506415c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Dec 18 23:47:33 ingress-addon-legacy-715187 crio[898]: time="2023-12-18 23:47:33.521576652Z" level=info msg="Stopped pod sandbox (already stopped): bc1e956dd0db1585844fc89880abbecfff200b56561aea77c9bc81c7bf7587ab" id=a4b3a0a5-1713-455b-827e-7bc9ebba9faa name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c198c40a97d16       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                   10 seconds ago      Exited              hello-world-app           2                   80a5ba9d63398       hello-world-app-5f5d8b66bb-c4sjz
	67a51c4dd1214       docker.io/library/nginx@sha256:18d2bb20c22e511b92a3ec81f553edfcaeeb74fd1c96a92c56a6c4252c75eec7                    2 minutes ago       Running             nginx                     0                   7d4d6374dc7c1       nginx
	c6cba7a6a61aa       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   bc1e956dd0db1       ingress-nginx-controller-7fcf777cb7-vpjkq
	0e46961118779       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   53b25a7f1cf30       ingress-nginx-admission-patch-hw4gz
	9595ceeae3bab       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   b23fae285a1a1       ingress-nginx-admission-create-fghts
	a50246c21523f       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   042286426e677       coredns-66bff467f8-hpq6c
	92fd871949318       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                                   3 minutes ago       Running             storage-provisioner       0                   81f85d7e780fb       storage-provisioner
	1fee4d33b9814       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   f0e2e32f5b316       kindnet-95r92
	aa18ff9f73774       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   d9680652728b6       kube-proxy-2zdrz
	f8f060bf2a3fc       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   3 minutes ago       Running             kube-apiserver            0                   f1f582b984cdb       kube-apiserver-ingress-addon-legacy-715187
	6f6b359a13c67       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   3 minutes ago       Running             kube-scheduler            0                   fd89c731eaa9e       kube-scheduler-ingress-addon-legacy-715187
	580f1c3a44dda       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   3 minutes ago       Running             etcd                      0                   faccf6fa50567       etcd-ingress-addon-legacy-715187
	fd51473812a95       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   3 minutes ago       Running             kube-controller-manager   0                   90ff0f4ce1c2c       kube-controller-manager-ingress-addon-legacy-715187
	
	* 
	* ==> coredns [a50246c21523fed581f717c19c495872ee199d7a6170ee85a85cf6e9c19a59a7] <==
	* [INFO] 10.244.0.5:55406 - 8581 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045645s
	[INFO] 10.244.0.5:55406 - 30762 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000772776s
	[INFO] 10.244.0.5:56146 - 17299 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002107961s
	[INFO] 10.244.0.5:56146 - 63472 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003090417s
	[INFO] 10.244.0.5:55406 - 33987 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003670695s
	[INFO] 10.244.0.5:55406 - 46750 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000135253s
	[INFO] 10.244.0.5:56146 - 63405 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000174038s
	[INFO] 10.244.0.5:36926 - 9353 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000117201s
	[INFO] 10.244.0.5:42149 - 2072 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000034847s
	[INFO] 10.244.0.5:36926 - 58715 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044726s
	[INFO] 10.244.0.5:36926 - 8757 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036521s
	[INFO] 10.244.0.5:36926 - 6466 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035085s
	[INFO] 10.244.0.5:36926 - 46167 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038465s
	[INFO] 10.244.0.5:36926 - 48834 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035651s
	[INFO] 10.244.0.5:42149 - 53109 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000029964s
	[INFO] 10.244.0.5:42149 - 55608 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000318052s
	[INFO] 10.244.0.5:36926 - 7285 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001288834s
	[INFO] 10.244.0.5:42149 - 47114 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000552438s
	[INFO] 10.244.0.5:42149 - 16282 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053013s
	[INFO] 10.244.0.5:42149 - 32517 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004338s
	[INFO] 10.244.0.5:42149 - 3755 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001968138s
	[INFO] 10.244.0.5:36926 - 15385 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001509951s
	[INFO] 10.244.0.5:36926 - 40145 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000081936s
	[INFO] 10.244.0.5:42149 - 15611 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001317904s
	[INFO] 10.244.0.5:42149 - 38645 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060562s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-715187
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-715187
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2
	                    minikube.k8s.io/name=ingress-addon-legacy-715187
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_18T23_43_50_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 23:43:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-715187
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 23:47:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 23:47:23 +0000   Mon, 18 Dec 2023 23:43:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 23:47:23 +0000   Mon, 18 Dec 2023 23:43:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 23:47:23 +0000   Mon, 18 Dec 2023 23:43:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 23:47:23 +0000   Mon, 18 Dec 2023 23:44:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-715187
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 f3ed4482a62b4d51bac8c381856506df
	  System UUID:                9f2cc2af-a0c7-4800-95f2-596177aa56ac
	  Boot ID:                    a58889d6-3937-44de-bde4-55a8fc7b5b88
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-c4sjz                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bff467f8-hpq6c                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m35s
	  kube-system                 etcd-ingress-addon-legacy-715187                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 kindnet-95r92                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m35s
	  kube-system                 kube-apiserver-ingress-addon-legacy-715187             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-715187    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 kube-proxy-2zdrz                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-scheduler-ingress-addon-legacy-715187             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From        Message
	  ----    ------                   ----             ----        -------
	  Normal  NodeHasSufficientMemory  4m (x5 over 4m)  kubelet     Node ingress-addon-legacy-715187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m (x4 over 4m)  kubelet     Node ingress-addon-legacy-715187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m (x4 over 4m)  kubelet     Node ingress-addon-legacy-715187 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m46s            kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m46s            kubelet     Node ingress-addon-legacy-715187 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m46s            kubelet     Node ingress-addon-legacy-715187 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m46s            kubelet     Node ingress-addon-legacy-715187 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m33s            kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m26s            kubelet     Node ingress-addon-legacy-715187 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001100] FS-Cache: O-key=[8] 'ccd3c90000000000'
	[  +0.000777] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001007] FS-Cache: N-cookie d=00000000a818fb90{9p.inode} n=0000000023100663
	[  +0.001130] FS-Cache: N-key=[8] 'ccd3c90000000000'
	[  +0.002924] FS-Cache: Duplicate cookie detected
	[  +0.000781] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.001006] FS-Cache: O-cookie d=00000000a818fb90{9p.inode} n=000000002f76f87c
	[  +0.001129] FS-Cache: O-key=[8] 'ccd3c90000000000'
	[  +0.001007] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001069] FS-Cache: N-cookie d=00000000a818fb90{9p.inode} n=00000000dda31a8a
	[  +0.001229] FS-Cache: N-key=[8] 'ccd3c90000000000'
	[  +2.393317] FS-Cache: Duplicate cookie detected
	[  +0.000757] FS-Cache: O-cookie c=00000029 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001019] FS-Cache: O-cookie d=00000000a818fb90{9p.inode} n=00000000e43fe467
	[  +0.001091] FS-Cache: O-key=[8] 'cbd3c90000000000'
	[  +0.000775] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001013] FS-Cache: N-cookie d=00000000a818fb90{9p.inode} n=0000000023100663
	[  +0.001106] FS-Cache: N-key=[8] 'cbd3c90000000000'
	[  +0.391545] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001015] FS-Cache: O-cookie d=00000000a818fb90{9p.inode} n=00000000a91c1b1c
	[  +0.001117] FS-Cache: O-key=[8] 'd1d3c90000000000'
	[  +0.000731] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=00000000a818fb90{9p.inode} n=000000001117d976
	[  +0.001087] FS-Cache: N-key=[8] 'd1d3c90000000000'
	
	* 
	* ==> etcd [580f1c3a44dda7b70072aa7d1722eeba228cfe34061bfc2d560a5bbc28dc62fa] <==
	* raft2023/12/18 23:43:41 INFO: aec36adc501070cc became follower at term 0
	raft2023/12/18 23:43:41 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/18 23:43:41 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/18 23:43:41 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-18 23:43:41.138635 W | auth: simple token is not cryptographically signed
	2023-12-18 23:43:41.155055 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-18 23:43:41.164520 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-18 23:43:41.165055 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-18 23:43:41.165446 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/12/18 23:43:41 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-18 23:43:41.165877 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-12-18 23:43:41.166349 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/12/18 23:43:41 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/18 23:43:41 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/18 23:43:41 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/18 23:43:41 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/18 23:43:41 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-18 23:43:41.904555 I | etcdserver: published {Name:ingress-addon-legacy-715187 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-18 23:43:41.904744 I | embed: ready to serve client requests
	2023-12-18 23:43:41.905146 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-18 23:43:41.905406 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-18 23:43:41.905496 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-18 23:43:41.908018 I | embed: ready to serve client requests
	2023-12-18 23:43:41.909284 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-18 23:43:42.249077 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  23:47:39 up  4:30,  0 users,  load average: 1.16, 1.37, 2.02
	Linux ingress-addon-legacy-715187 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [1fee4d33b9814889a9517ae1d69d34e6eab567cfaed356811dd3a0f4a2a664de] <==
	* I1218 23:45:38.296698       1 main.go:227] handling current node
	I1218 23:45:48.300695       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:45:48.300723       1 main.go:227] handling current node
	I1218 23:45:58.304103       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:45:58.304130       1 main.go:227] handling current node
	I1218 23:46:08.307717       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:46:08.307749       1 main.go:227] handling current node
	I1218 23:46:18.319045       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:46:18.319069       1 main.go:227] handling current node
	I1218 23:46:28.330547       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:46:28.330575       1 main.go:227] handling current node
	I1218 23:46:38.340681       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:46:38.340708       1 main.go:227] handling current node
	I1218 23:46:48.352847       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:46:48.352876       1 main.go:227] handling current node
	I1218 23:46:58.356051       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:46:58.356076       1 main.go:227] handling current node
	I1218 23:47:08.359291       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:47:08.359318       1 main.go:227] handling current node
	I1218 23:47:18.367934       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:47:18.367994       1 main.go:227] handling current node
	I1218 23:47:28.381087       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:47:28.381113       1 main.go:227] handling current node
	I1218 23:47:38.391419       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1218 23:47:38.391605       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [f8f060bf2a3fc741209a7d93d0eafb739552cd22a3e872dc9112224a5d1d6934] <==
	* I1218 23:43:46.976872       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I1218 23:43:47.078800       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1218 23:43:47.091145       1 cache.go:39] Caches are synced for autoregister controller
	I1218 23:43:47.091570       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1218 23:43:47.180351       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1218 23:43:47.181886       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1218 23:43:47.876738       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1218 23:43:47.876766       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1218 23:43:47.884731       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1218 23:43:47.889178       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1218 23:43:47.889196       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1218 23:43:48.288763       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1218 23:43:48.331362       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1218 23:43:48.405908       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1218 23:43:48.407129       1 controller.go:609] quota admission added evaluator for: endpoints
	I1218 23:43:48.414028       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1218 23:43:49.344624       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1218 23:43:49.812988       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1218 23:43:49.941359       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1218 23:43:53.217101       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1218 23:44:04.782837       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1218 23:44:04.821392       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1218 23:44:29.588121       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1218 23:44:52.372912       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1218 23:47:30.376603       1 watch.go:251] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0x400589bc80), encoder:(*versioning.codec)(0x4008523040), buf:(*bytes.Buffer)(0x4009b001e0)})
	
	* 
	* ==> kube-controller-manager [fd51473812a95ea55218abeaaedb24a8523df9b19409b4b205242fb614659722] <==
	* I1218 23:44:05.164603       1 shared_informer.go:230] Caches are synced for PV protection 
	I1218 23:44:05.218769       1 shared_informer.go:230] Caches are synced for HPA 
	I1218 23:44:05.248213       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I1218 23:44:05.346003       1 shared_informer.go:230] Caches are synced for PVC protection 
	I1218 23:44:05.346860       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1218 23:44:05.354865       1 shared_informer.go:230] Caches are synced for attach detach 
	I1218 23:44:05.362524       1 shared_informer.go:230] Caches are synced for expand 
	I1218 23:44:05.368290       1 shared_informer.go:230] Caches are synced for resource quota 
	I1218 23:44:05.394015       1 shared_informer.go:230] Caches are synced for stateful set 
	I1218 23:44:05.395633       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1218 23:44:05.398763       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1218 23:44:05.398841       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1218 23:44:05.399132       1 shared_informer.go:230] Caches are synced for resource quota 
	I1218 23:44:05.478412       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"8765f585-0bfd-41cf-b18f-8f89eb4632cd", APIVersion:"apps/v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1218 23:44:05.596578       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"7ee6ea0b-1c21-4cd0-ac0f-3e9978617ea9", APIVersion:"apps/v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-pkdd8
	I1218 23:44:14.836798       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1218 23:44:29.595376       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"73f5624d-ed07-4123-9050-495696cbd2e9", APIVersion:"apps/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1218 23:44:29.607511       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"62d55e98-d0f7-4377-bf05-af8225509538", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-fghts
	I1218 23:44:29.625664       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"1e01c24c-b107-4188-9960-c7e85dba1541", APIVersion:"apps/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-vpjkq
	I1218 23:44:29.661419       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"50dde50d-b4fb-4a9f-9523-b3bc7c71cf66", APIVersion:"batch/v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-hw4gz
	I1218 23:44:32.638986       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"62d55e98-d0f7-4377-bf05-af8225509538", APIVersion:"batch/v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1218 23:44:32.659134       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"50dde50d-b4fb-4a9f-9523-b3bc7c71cf66", APIVersion:"batch/v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1218 23:47:12.373599       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"55a12ea2-1ed0-4df4-8e6a-8314ec48a1a0", APIVersion:"apps/v1", ResourceVersion:"718", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1218 23:47:12.387499       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"104b153c-3937-4cca-bbf2-8d07ff5cf992", APIVersion:"apps/v1", ResourceVersion:"719", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-c4sjz
	E1218 23:47:35.972227       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-hjhvn" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [aa18ff9f73774a5f6fabeee52b37d781e2d09335a7bfeef55699967aa342c96c] <==
	* W1218 23:44:06.107307       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1218 23:44:06.138454       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1218 23:44:06.138617       1 server_others.go:186] Using iptables Proxier.
	I1218 23:44:06.139866       1 server.go:583] Version: v1.18.20
	I1218 23:44:06.141932       1 config.go:315] Starting service config controller
	I1218 23:44:06.141973       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1218 23:44:06.143098       1 config.go:133] Starting endpoints config controller
	I1218 23:44:06.143968       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1218 23:44:06.246806       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1218 23:44:06.247035       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [6f6b359a13c677ad57a70dd84b1b6c5f0be02a5b890f9e64f52dc1485c59dc84] <==
	* W1218 23:43:47.068855       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1218 23:43:47.068983       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1218 23:43:47.069028       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1218 23:43:47.069066       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1218 23:43:47.108798       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1218 23:43:47.108863       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1218 23:43:47.111415       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1218 23:43:47.111734       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1218 23:43:47.111759       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1218 23:43:47.111794       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1218 23:43:47.132463       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1218 23:43:47.132627       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1218 23:43:47.132735       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1218 23:43:47.132816       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1218 23:43:47.132911       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1218 23:43:47.133025       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1218 23:43:47.133102       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1218 23:43:47.133177       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1218 23:43:47.133248       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1218 23:43:47.133309       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1218 23:43:47.133380       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1218 23:43:47.138663       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1218 23:43:47.976200       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1218 23:43:48.110760       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1218 23:43:48.811917       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Dec 18 23:47:16 ingress-addon-legacy-715187 kubelet[1632]: I1218 23:47:16.339537    1632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c337cacf0152f35216a8b0ae57efbbbee824e2d835c574cdcf7daee7f56dbb3e
	Dec 18 23:47:16 ingress-addon-legacy-715187 kubelet[1632]: I1218 23:47:16.339668    1632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 00eeae2509793da6e8d9aacdf2ab77d697983f0c728f8c0d49c6ba90deefa309
	Dec 18 23:47:16 ingress-addon-legacy-715187 kubelet[1632]: E1218 23:47:16.339930    1632 pod_workers.go:191] Error syncing pod e9872a35-ac6d-4b51-9e4d-658695710cba ("hello-world-app-5f5d8b66bb-c4sjz_default(e9872a35-ac6d-4b51-9e4d-658695710cba)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-c4sjz_default(e9872a35-ac6d-4b51-9e4d-658695710cba)"
	Dec 18 23:47:17 ingress-addon-legacy-715187 kubelet[1632]: I1218 23:47:17.342461    1632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 00eeae2509793da6e8d9aacdf2ab77d697983f0c728f8c0d49c6ba90deefa309
	Dec 18 23:47:17 ingress-addon-legacy-715187 kubelet[1632]: E1218 23:47:17.342715    1632 pod_workers.go:191] Error syncing pod e9872a35-ac6d-4b51-9e4d-658695710cba ("hello-world-app-5f5d8b66bb-c4sjz_default(e9872a35-ac6d-4b51-9e4d-658695710cba)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-c4sjz_default(e9872a35-ac6d-4b51-9e4d-658695710cba)"
	Dec 18 23:47:20 ingress-addon-legacy-715187 kubelet[1632]: E1218 23:47:20.375363    1632 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 18 23:47:20 ingress-addon-legacy-715187 kubelet[1632]: E1218 23:47:20.375414    1632 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 18 23:47:20 ingress-addon-legacy-715187 kubelet[1632]: E1218 23:47:20.375465    1632 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Dec 18 23:47:20 ingress-addon-legacy-715187 kubelet[1632]: E1218 23:47:20.375496    1632 pod_workers.go:191] Error syncing pod f2f6cf88-cfd4-4e97-b652-52be508ff040 ("kube-ingress-dns-minikube_kube-system(f2f6cf88-cfd4-4e97-b652-52be508ff040)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Dec 18 23:47:28 ingress-addon-legacy-715187 kubelet[1632]: I1218 23:47:28.374338    1632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 00eeae2509793da6e8d9aacdf2ab77d697983f0c728f8c0d49c6ba90deefa309
	Dec 18 23:47:28 ingress-addon-legacy-715187 kubelet[1632]: I1218 23:47:28.468584    1632 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-7qsrz" (UniqueName: "kubernetes.io/secret/f2f6cf88-cfd4-4e97-b652-52be508ff040-minikube-ingress-dns-token-7qsrz") pod "f2f6cf88-cfd4-4e97-b652-52be508ff040" (UID: "f2f6cf88-cfd4-4e97-b652-52be508ff040")
	Dec 18 23:47:28 ingress-addon-legacy-715187 kubelet[1632]: I1218 23:47:28.476608    1632 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2f6cf88-cfd4-4e97-b652-52be508ff040-minikube-ingress-dns-token-7qsrz" (OuterVolumeSpecName: "minikube-ingress-dns-token-7qsrz") pod "f2f6cf88-cfd4-4e97-b652-52be508ff040" (UID: "f2f6cf88-cfd4-4e97-b652-52be508ff040"). InnerVolumeSpecName "minikube-ingress-dns-token-7qsrz". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 18 23:47:28 ingress-addon-legacy-715187 kubelet[1632]: I1218 23:47:28.568969    1632 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-7qsrz" (UniqueName: "kubernetes.io/secret/f2f6cf88-cfd4-4e97-b652-52be508ff040-minikube-ingress-dns-token-7qsrz") on node "ingress-addon-legacy-715187" DevicePath ""
	Dec 18 23:47:29 ingress-addon-legacy-715187 kubelet[1632]: I1218 23:47:29.359922    1632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 00eeae2509793da6e8d9aacdf2ab77d697983f0c728f8c0d49c6ba90deefa309
	Dec 18 23:47:29 ingress-addon-legacy-715187 kubelet[1632]: I1218 23:47:29.360431    1632 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c198c40a97d167b6c0cfdafbfa1161a79bbc69574afd748bed49a166c4419aec
	Dec 18 23:47:29 ingress-addon-legacy-715187 kubelet[1632]: E1218 23:47:29.360698    1632 pod_workers.go:191] Error syncing pod e9872a35-ac6d-4b51-9e4d-658695710cba ("hello-world-app-5f5d8b66bb-c4sjz_default(e9872a35-ac6d-4b51-9e4d-658695710cba)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-c4sjz_default(e9872a35-ac6d-4b51-9e4d-658695710cba)"
	Dec 18 23:47:31 ingress-addon-legacy-715187 kubelet[1632]: E1218 23:47:31.298911    1632 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-vpjkq.17a211dd4e26fb7b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-vpjkq", UID:"7044e932-f0f4-4432-96c6-63d266c8faff", APIVersion:"v1", ResourceVersion:"496", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-715187"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1585464d1a5fd7b, ext:221528253663, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1585464d1a5fd7b, ext:221528253663, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-vpjkq.17a211dd4e26fb7b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 18 23:47:31 ingress-addon-legacy-715187 kubelet[1632]: E1218 23:47:31.309980    1632 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-vpjkq.17a211dd4e26fb7b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-vpjkq", UID:"7044e932-f0f4-4432-96c6-63d266c8faff", APIVersion:"v1", ResourceVersion:"496", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-715187"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1585464d1a5fd7b, ext:221528253663, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1585464d2282571, ext:221536783573, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-vpjkq.17a211dd4e26fb7b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 18 23:47:34 ingress-addon-legacy-715187 kubelet[1632]: W1218 23:47:34.370534    1632 pod_container_deletor.go:77] Container "bc1e956dd0db1585844fc89880abbecfff200b56561aea77c9bc81c7bf7587ab" not found in pod's containers
	Dec 18 23:47:35 ingress-addon-legacy-715187 kubelet[1632]: I1218 23:47:35.485344    1632 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-dhplp" (UniqueName: "kubernetes.io/secret/7044e932-f0f4-4432-96c6-63d266c8faff-ingress-nginx-token-dhplp") pod "7044e932-f0f4-4432-96c6-63d266c8faff" (UID: "7044e932-f0f4-4432-96c6-63d266c8faff")
	Dec 18 23:47:35 ingress-addon-legacy-715187 kubelet[1632]: I1218 23:47:35.485405    1632 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7044e932-f0f4-4432-96c6-63d266c8faff-webhook-cert") pod "7044e932-f0f4-4432-96c6-63d266c8faff" (UID: "7044e932-f0f4-4432-96c6-63d266c8faff")
	Dec 18 23:47:35 ingress-addon-legacy-715187 kubelet[1632]: I1218 23:47:35.492119    1632 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7044e932-f0f4-4432-96c6-63d266c8faff-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7044e932-f0f4-4432-96c6-63d266c8faff" (UID: "7044e932-f0f4-4432-96c6-63d266c8faff"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 18 23:47:35 ingress-addon-legacy-715187 kubelet[1632]: I1218 23:47:35.495112    1632 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7044e932-f0f4-4432-96c6-63d266c8faff-ingress-nginx-token-dhplp" (OuterVolumeSpecName: "ingress-nginx-token-dhplp") pod "7044e932-f0f4-4432-96c6-63d266c8faff" (UID: "7044e932-f0f4-4432-96c6-63d266c8faff"). InnerVolumeSpecName "ingress-nginx-token-dhplp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 18 23:47:35 ingress-addon-legacy-715187 kubelet[1632]: I1218 23:47:35.585737    1632 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7044e932-f0f4-4432-96c6-63d266c8faff-webhook-cert") on node "ingress-addon-legacy-715187" DevicePath ""
	Dec 18 23:47:35 ingress-addon-legacy-715187 kubelet[1632]: I1218 23:47:35.585802    1632 reconciler.go:319] Volume detached for volume "ingress-nginx-token-dhplp" (UniqueName: "kubernetes.io/secret/7044e932-f0f4-4432-96c6-63d266c8faff-ingress-nginx-token-dhplp") on node "ingress-addon-legacy-715187" DevicePath ""
	
	* 
	* ==> storage-provisioner [92fd8719493185be40deb2139c95371a1bda75cd0c76b90535e0a969b42b2ad7] <==
	* I1218 23:44:13.849715       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1218 23:44:13.861979       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1218 23:44:13.862077       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1218 23:44:13.868856       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1218 23:44:13.869296       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ff5b9d6-d35d-4948-95a0-e598f4179d20", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-715187_d41086d6-a92b-4ece-81d1-2a4f8977eaf1 became leader
	I1218 23:44:13.869364       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-715187_d41086d6-a92b-4ece-81d1-2a4f8977eaf1!
	I1218 23:44:13.970215       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-715187_d41086d6-a92b-4ece-81d1-2a4f8977eaf1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-715187 -n ingress-addon-legacy-715187
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-715187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (180.53s)
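A likely contributor, judging from the kubelet log in the post-mortem dump above: CRI-O refuses the short image name used by the kube-ingress-dns-minikube pod because no unqualified-search registries are defined on the node ("did not resolve to an alias and no unqualified-search registries are defined in /etc/containers/registries.conf"). A minimal sketch of the two usual remedies follows; the fully-qualified reference simply prepends docker.io to the digest-pinned name already shown in the log, and the registries.conf key is the standard containers-registries setting, not something taken from this run:

	# (a) reference the image fully qualified:
	docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab
	# (b) or allow short-name resolution in /etc/containers/registries.conf on the node:
	unqualified-search-registries = ["docker.io"]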

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-320272 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-320272 -- exec busybox-5bc68d56bd-9rw5h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-320272 -- exec busybox-5bc68d56bd-9rw5h -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-320272 -- exec busybox-5bc68d56bd-9rw5h -- sh -c "ping -c 1 192.168.58.1": exit status 1 (229.045356ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-9rw5h): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-320272 -- exec busybox-5bc68d56bd-tdcv5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-320272 -- exec busybox-5bc68d56bd-tdcv5 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-320272 -- exec busybox-5bc68d56bd-tdcv5 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (229.853013ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-tdcv5): exit status 1
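Both pods hit the same failure mode: busybox prints "ping: permission denied (are you root?)", which typically means the container has no CAP_NET_RAW and the node's net.ipv4.ping_group_range does not include the container's GID, so neither a raw nor a datagram ICMP socket can be opened. A minimal sketch of how the node-side setting could be checked for this profile (mirroring the ssh invocations used elsewhere in this report; it was not run here):

	out/minikube-linux-arm64 -p multinode-320272 ssh -- sysctl net.ipv4.ping_group_range
	# the kernel default "1 0" is an empty range, i.e. unprivileged ICMP echo is disabled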
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-320272
helpers_test.go:235: (dbg) docker inspect multinode-320272:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "71070d7623c10070af469b551f3bbb0dab5f1c71cb21c117a4669e81f8474851",
	        "Created": "2023-12-18T23:53:38.448087429Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 881918,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-18T23:53:38.772895232Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1814c03459e4d6cfdad8e09772588ae07599742dd01f942aa2f9fe1dbb6d2813",
	        "ResolvConfPath": "/var/lib/docker/containers/71070d7623c10070af469b551f3bbb0dab5f1c71cb21c117a4669e81f8474851/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/71070d7623c10070af469b551f3bbb0dab5f1c71cb21c117a4669e81f8474851/hostname",
	        "HostsPath": "/var/lib/docker/containers/71070d7623c10070af469b551f3bbb0dab5f1c71cb21c117a4669e81f8474851/hosts",
	        "LogPath": "/var/lib/docker/containers/71070d7623c10070af469b551f3bbb0dab5f1c71cb21c117a4669e81f8474851/71070d7623c10070af469b551f3bbb0dab5f1c71cb21c117a4669e81f8474851-json.log",
	        "Name": "/multinode-320272",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-320272:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-320272",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ef3cc3e92520427046ae5cc04fb0b63232e584a8fec99414eb609a731cd3fbda-init/diff:/var/lib/docker/overlay2/db874852d391376facd52e960a3e68faa10fa2be9d9e14dbf2dda2d1f908e37e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ef3cc3e92520427046ae5cc04fb0b63232e584a8fec99414eb609a731cd3fbda/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ef3cc3e92520427046ae5cc04fb0b63232e584a8fec99414eb609a731cd3fbda/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ef3cc3e92520427046ae5cc04fb0b63232e584a8fec99414eb609a731cd3fbda/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-320272",
	                "Source": "/var/lib/docker/volumes/multinode-320272/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-320272",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-320272",
	                "name.minikube.sigs.k8s.io": "multinode-320272",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6bf53a6e94bb41f70edfab1b11c29088229f41df8af5045a68f255a00d4093ee",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33516"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33515"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6bf53a6e94bb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-320272": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "71070d7623c1",
	                        "multinode-320272"
	                    ],
	                    "NetworkID": "b3740fa51eb7ffaa0b4bee93ecf90a54451831b7c45db6b18c8bf957f165d9fa",
	                    "EndpointID": "5906cc9772bbb9adc7a8b3d730d994583ab6a11533ee8b9a5788bdeaddb5c5b1",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
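For context, the Networks section of the inspect output above lists "Gateway": "192.168.58.1" for the multinode-320272 network, so the address the busybox pods fail to ping is the host-side bridge gateway rather than another pod or node, which is consistent with an in-pod permission error rather than a routing problem. A small sketch for reading the same value directly (not executed as part of this run):

	docker network inspect multinode-320272 --format '{{ (index .IPAM.Config 0).Gateway }}'
	# expected output for this network: 192.168.58.1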
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-320272 -n multinode-320272
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-320272 logs -n 25: (1.554888771s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-530270                           | mount-start-2-530270 | jenkins | v1.32.0 | 18 Dec 23 23:53 UTC | 18 Dec 23 23:53 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-530270 ssh -- ls                    | mount-start-2-530270 | jenkins | v1.32.0 | 18 Dec 23 23:53 UTC | 18 Dec 23 23:53 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-528112                           | mount-start-1-528112 | jenkins | v1.32.0 | 18 Dec 23 23:53 UTC | 18 Dec 23 23:53 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-530270 ssh -- ls                    | mount-start-2-530270 | jenkins | v1.32.0 | 18 Dec 23 23:53 UTC | 18 Dec 23 23:53 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-530270                           | mount-start-2-530270 | jenkins | v1.32.0 | 18 Dec 23 23:53 UTC | 18 Dec 23 23:53 UTC |
	| start   | -p mount-start-2-530270                           | mount-start-2-530270 | jenkins | v1.32.0 | 18 Dec 23 23:53 UTC | 18 Dec 23 23:53 UTC |
	| ssh     | mount-start-2-530270 ssh -- ls                    | mount-start-2-530270 | jenkins | v1.32.0 | 18 Dec 23 23:53 UTC | 18 Dec 23 23:53 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-530270                           | mount-start-2-530270 | jenkins | v1.32.0 | 18 Dec 23 23:53 UTC | 18 Dec 23 23:53 UTC |
	| delete  | -p mount-start-1-528112                           | mount-start-1-528112 | jenkins | v1.32.0 | 18 Dec 23 23:53 UTC | 18 Dec 23 23:53 UTC |
	| start   | -p multinode-320272                               | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:53 UTC | 18 Dec 23 23:55 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-320272 -- apply -f                   | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:55 UTC | 18 Dec 23 23:55 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-320272 -- rollout                    | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:55 UTC | 18 Dec 23 23:55 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-320272 -- get pods -o                | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:55 UTC | 18 Dec 23 23:55 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-320272 -- get pods -o                | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:55 UTC | 18 Dec 23 23:55 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-320272 -- exec                       | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:55 UTC | 18 Dec 23 23:55 UTC |
	|         | busybox-5bc68d56bd-9rw5h --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-320272 -- exec                       | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:55 UTC | 18 Dec 23 23:55 UTC |
	|         | busybox-5bc68d56bd-tdcv5 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-320272 -- exec                       | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:55 UTC | 18 Dec 23 23:55 UTC |
	|         | busybox-5bc68d56bd-9rw5h --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-320272 -- exec                       | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:55 UTC | 18 Dec 23 23:55 UTC |
	|         | busybox-5bc68d56bd-tdcv5 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-320272 -- exec                       | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:55 UTC | 18 Dec 23 23:55 UTC |
	|         | busybox-5bc68d56bd-9rw5h -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-320272 -- exec                       | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:55 UTC | 18 Dec 23 23:55 UTC |
	|         | busybox-5bc68d56bd-tdcv5 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-320272 -- get pods -o                | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:55 UTC | 18 Dec 23 23:55 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-320272 -- exec                       | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:55 UTC | 18 Dec 23 23:55 UTC |
	|         | busybox-5bc68d56bd-9rw5h                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-320272 -- exec                       | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:55 UTC |                     |
	|         | busybox-5bc68d56bd-9rw5h -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-320272 -- exec                       | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:55 UTC | 18 Dec 23 23:55 UTC |
	|         | busybox-5bc68d56bd-tdcv5                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-320272 -- exec                       | multinode-320272     | jenkins | v1.32.0 | 18 Dec 23 23:55 UTC |                     |
	|         | busybox-5bc68d56bd-tdcv5 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 23:53:32
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 23:53:32.995219  881462 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:53:32.995455  881462 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:53:32.995468  881462 out.go:309] Setting ErrFile to fd 2...
	I1218 23:53:32.995474  881462 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:53:32.995786  881462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
	I1218 23:53:32.996270  881462 out.go:303] Setting JSON to false
	I1218 23:53:32.997179  881462 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":16555,"bootTime":1702927058,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 23:53:32.997246  881462 start.go:138] virtualization:  
	I1218 23:53:32.999743  881462 out.go:177] * [multinode-320272] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 23:53:33.004763  881462 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 23:53:33.004956  881462 notify.go:220] Checking for updates...
	I1218 23:53:33.010494  881462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:53:33.012328  881462 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:53:33.014117  881462 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	I1218 23:53:33.015975  881462 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 23:53:33.017892  881462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 23:53:33.019911  881462 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 23:53:33.045455  881462 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 23:53:33.045600  881462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:53:33.136602  881462 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-18 23:53:33.124950373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:53:33.136719  881462 docker.go:295] overlay module found
	I1218 23:53:33.139027  881462 out.go:177] * Using the docker driver based on user configuration
	I1218 23:53:33.140987  881462 start.go:298] selected driver: docker
	I1218 23:53:33.141006  881462 start.go:902] validating driver "docker" against <nil>
	I1218 23:53:33.141044  881462 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 23:53:33.141705  881462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:53:33.216875  881462 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-18 23:53:33.207570126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:53:33.217043  881462 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 23:53:33.217266  881462 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 23:53:33.219130  881462 out.go:177] * Using Docker driver with root privileges
	I1218 23:53:33.220662  881462 cni.go:84] Creating CNI manager for ""
	I1218 23:53:33.220683  881462 cni.go:136] 0 nodes found, recommending kindnet
	I1218 23:53:33.220694  881462 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 23:53:33.220710  881462 start_flags.go:323] config:
	{Name:multinode-320272 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-320272 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:53:33.222595  881462 out.go:177] * Starting control plane node multinode-320272 in cluster multinode-320272
	I1218 23:53:33.224312  881462 cache.go:121] Beginning downloading kic base image for docker with crio
	I1218 23:53:33.225770  881462 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1218 23:53:33.227156  881462 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1218 23:53:33.227205  881462 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1218 23:53:33.227219  881462 cache.go:56] Caching tarball of preloaded images
	I1218 23:53:33.227240  881462 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 23:53:33.227317  881462 preload.go:174] Found /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1218 23:53:33.227328  881462 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1218 23:53:33.227681  881462 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/config.json ...
	I1218 23:53:33.227715  881462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/config.json: {Name:mk5ae8967f6cf52cb0458bd3edae67abf1a59234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:53:33.244636  881462 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon, skipping pull
	I1218 23:53:33.244673  881462 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in daemon, skipping load
	I1218 23:53:33.244706  881462 cache.go:194] Successfully downloaded all kic artifacts
	I1218 23:53:33.244773  881462 start.go:365] acquiring machines lock for multinode-320272: {Name:mke86967fbd9185f17dab38cd34c1e3bfcf0b4c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 23:53:33.244893  881462 start.go:369] acquired machines lock for "multinode-320272" in 98.387µs
	I1218 23:53:33.244922  881462 start.go:93] Provisioning new machine with config: &{Name:multinode-320272 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-320272 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1218 23:53:33.245021  881462 start.go:125] createHost starting for "" (driver="docker")
	I1218 23:53:33.248252  881462 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1218 23:53:33.248538  881462 start.go:159] libmachine.API.Create for "multinode-320272" (driver="docker")
	I1218 23:53:33.248574  881462 client.go:168] LocalClient.Create starting
	I1218 23:53:33.248687  881462 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem
	I1218 23:53:33.248731  881462 main.go:141] libmachine: Decoding PEM data...
	I1218 23:53:33.248755  881462 main.go:141] libmachine: Parsing certificate...
	I1218 23:53:33.248811  881462 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem
	I1218 23:53:33.248834  881462 main.go:141] libmachine: Decoding PEM data...
	I1218 23:53:33.248845  881462 main.go:141] libmachine: Parsing certificate...
	I1218 23:53:33.249239  881462 cli_runner.go:164] Run: docker network inspect multinode-320272 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 23:53:33.266465  881462 cli_runner.go:211] docker network inspect multinode-320272 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 23:53:33.266548  881462 network_create.go:281] running [docker network inspect multinode-320272] to gather additional debugging logs...
	I1218 23:53:33.266570  881462 cli_runner.go:164] Run: docker network inspect multinode-320272
	W1218 23:53:33.284268  881462 cli_runner.go:211] docker network inspect multinode-320272 returned with exit code 1
	I1218 23:53:33.284303  881462 network_create.go:284] error running [docker network inspect multinode-320272]: docker network inspect multinode-320272: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-320272 not found
	I1218 23:53:33.284315  881462 network_create.go:286] output of [docker network inspect multinode-320272]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-320272 not found
	
	** /stderr **
	I1218 23:53:33.284411  881462 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 23:53:33.301935  881462 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-775245b59831 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:eb:d5:75:4c} reservation:<nil>}
	I1218 23:53:33.302303  881462 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024cfed0}
	I1218 23:53:33.302324  881462 network_create.go:124] attempt to create docker network multinode-320272 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1218 23:53:33.302389  881462 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-320272 multinode-320272
	I1218 23:53:33.378210  881462 network_create.go:108] docker network multinode-320272 192.168.58.0/24 created
	I1218 23:53:33.378245  881462 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-320272" container
	I1218 23:53:33.378322  881462 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 23:53:33.397658  881462 cli_runner.go:164] Run: docker volume create multinode-320272 --label name.minikube.sigs.k8s.io=multinode-320272 --label created_by.minikube.sigs.k8s.io=true
	I1218 23:53:33.416293  881462 oci.go:103] Successfully created a docker volume multinode-320272
	I1218 23:53:33.416382  881462 cli_runner.go:164] Run: docker run --rm --name multinode-320272-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-320272 --entrypoint /usr/bin/test -v multinode-320272:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1218 23:53:34.042842  881462 oci.go:107] Successfully prepared a docker volume multinode-320272
	I1218 23:53:34.042891  881462 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1218 23:53:34.042910  881462 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 23:53:34.042999  881462 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-320272:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 23:53:38.358773  881462 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-320272:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.315733496s)
	I1218 23:53:38.358807  881462 kic.go:203] duration metric: took 4.315893 seconds to extract preloaded images to volume
	W1218 23:53:38.358966  881462 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 23:53:38.359073  881462 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 23:53:38.432022  881462 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-320272 --name multinode-320272 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-320272 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-320272 --network multinode-320272 --ip 192.168.58.2 --volume multinode-320272:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1218 23:53:38.780410  881462 cli_runner.go:164] Run: docker container inspect multinode-320272 --format={{.State.Running}}
	I1218 23:53:38.816549  881462 cli_runner.go:164] Run: docker container inspect multinode-320272 --format={{.State.Status}}
	I1218 23:53:38.843803  881462 cli_runner.go:164] Run: docker exec multinode-320272 stat /var/lib/dpkg/alternatives/iptables
	I1218 23:53:38.917825  881462 oci.go:144] the created container "multinode-320272" has a running status.
	I1218 23:53:38.917852  881462 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272/id_rsa...
	I1218 23:53:40.349944  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1218 23:53:40.350031  881462 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 23:53:40.370642  881462 cli_runner.go:164] Run: docker container inspect multinode-320272 --format={{.State.Status}}
	I1218 23:53:40.388348  881462 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 23:53:40.388386  881462 kic_runner.go:114] Args: [docker exec --privileged multinode-320272 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 23:53:40.432980  881462 cli_runner.go:164] Run: docker container inspect multinode-320272 --format={{.State.Status}}
	I1218 23:53:40.450647  881462 machine.go:88] provisioning docker machine ...
	I1218 23:53:40.450677  881462 ubuntu.go:169] provisioning hostname "multinode-320272"
	I1218 23:53:40.450743  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272
	I1218 23:53:40.470114  881462 main.go:141] libmachine: Using SSH client type: native
	I1218 23:53:40.470564  881462 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33516 <nil> <nil>}
	I1218 23:53:40.470582  881462 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-320272 && echo "multinode-320272" | sudo tee /etc/hostname
	I1218 23:53:40.634722  881462 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-320272
	
	I1218 23:53:40.634805  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272
	I1218 23:53:40.653698  881462 main.go:141] libmachine: Using SSH client type: native
	I1218 23:53:40.654133  881462 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33516 <nil> <nil>}
	I1218 23:53:40.654157  881462 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-320272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-320272/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-320272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 23:53:40.805257  881462 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 23:53:40.805287  881462 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17822-812008/.minikube CaCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17822-812008/.minikube}
	I1218 23:53:40.805311  881462 ubuntu.go:177] setting up certificates
	I1218 23:53:40.805329  881462 provision.go:83] configureAuth start
	I1218 23:53:40.805394  881462 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-320272
	I1218 23:53:40.824792  881462 provision.go:138] copyHostCerts
	I1218 23:53:40.824834  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem
	I1218 23:53:40.824868  881462 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem, removing ...
	I1218 23:53:40.824879  881462 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem
	I1218 23:53:40.824955  881462 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem (1078 bytes)
	I1218 23:53:40.825043  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem
	I1218 23:53:40.825076  881462 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem, removing ...
	I1218 23:53:40.825084  881462 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem
	I1218 23:53:40.825117  881462 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem (1123 bytes)
	I1218 23:53:40.825171  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem
	I1218 23:53:40.825198  881462 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem, removing ...
	I1218 23:53:40.825205  881462 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem
	I1218 23:53:40.825232  881462 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem (1679 bytes)
	I1218 23:53:40.825283  881462 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem org=jenkins.multinode-320272 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-320272]
	I1218 23:53:41.005361  881462 provision.go:172] copyRemoteCerts
	I1218 23:53:41.005437  881462 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 23:53:41.005486  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272
	I1218 23:53:41.026274  881462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33516 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272/id_rsa Username:docker}
	I1218 23:53:41.131813  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 23:53:41.131876  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 23:53:41.160105  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 23:53:41.160164  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1218 23:53:41.187643  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 23:53:41.187702  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 23:53:41.214719  881462 provision.go:86] duration metric: configureAuth took 409.371536ms
	I1218 23:53:41.214794  881462 ubuntu.go:193] setting minikube options for container-runtime
	I1218 23:53:41.215010  881462 config.go:182] Loaded profile config "multinode-320272": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1218 23:53:41.215126  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272
	I1218 23:53:41.232967  881462 main.go:141] libmachine: Using SSH client type: native
	I1218 23:53:41.233397  881462 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33516 <nil> <nil>}
	I1218 23:53:41.233420  881462 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1218 23:53:41.500446  881462 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1218 23:53:41.500473  881462 machine.go:91] provisioned docker machine in 1.049805272s
	I1218 23:53:41.500484  881462 client.go:171] LocalClient.Create took 8.251903536s
	I1218 23:53:41.500497  881462 start.go:167] duration metric: libmachine.API.Create for "multinode-320272" took 8.251959896s
	I1218 23:53:41.500504  881462 start.go:300] post-start starting for "multinode-320272" (driver="docker")
	I1218 23:53:41.500514  881462 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 23:53:41.500587  881462 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 23:53:41.500639  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272
	I1218 23:53:41.530858  881462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33516 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272/id_rsa Username:docker}
	I1218 23:53:41.639008  881462 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 23:53:41.643141  881462 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1218 23:53:41.643161  881462 command_runner.go:130] > NAME="Ubuntu"
	I1218 23:53:41.643169  881462 command_runner.go:130] > VERSION_ID="22.04"
	I1218 23:53:41.643176  881462 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1218 23:53:41.643182  881462 command_runner.go:130] > VERSION_CODENAME=jammy
	I1218 23:53:41.643187  881462 command_runner.go:130] > ID=ubuntu
	I1218 23:53:41.643192  881462 command_runner.go:130] > ID_LIKE=debian
	I1218 23:53:41.643205  881462 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1218 23:53:41.643213  881462 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1218 23:53:41.643223  881462 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1218 23:53:41.643233  881462 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1218 23:53:41.643240  881462 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1218 23:53:41.643289  881462 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 23:53:41.643320  881462 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1218 23:53:41.643337  881462 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1218 23:53:41.643345  881462 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1218 23:53:41.643358  881462 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/addons for local assets ...
	I1218 23:53:41.643416  881462 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/files for local assets ...
	I1218 23:53:41.643505  881462 filesync.go:149] local asset: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem -> 8173782.pem in /etc/ssl/certs
	I1218 23:53:41.643517  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem -> /etc/ssl/certs/8173782.pem
	I1218 23:53:41.643621  881462 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 23:53:41.654122  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem --> /etc/ssl/certs/8173782.pem (1708 bytes)
	I1218 23:53:41.682951  881462 start.go:303] post-start completed in 182.431436ms
	I1218 23:53:41.683355  881462 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-320272
	I1218 23:53:41.700797  881462 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/config.json ...
	I1218 23:53:41.701083  881462 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 23:53:41.701137  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272
	I1218 23:53:41.718539  881462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33516 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272/id_rsa Username:docker}
	I1218 23:53:41.821985  881462 command_runner.go:130] > 18%!
	(MISSING)I1218 23:53:41.822080  881462 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 23:53:41.827766  881462 command_runner.go:130] > 161G
	I1218 23:53:41.827812  881462 start.go:128] duration metric: createHost completed in 8.582779193s
	I1218 23:53:41.827824  881462 start.go:83] releasing machines lock for "multinode-320272", held for 8.5829189s
	I1218 23:53:41.827901  881462 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-320272
	I1218 23:53:41.845824  881462 ssh_runner.go:195] Run: cat /version.json
	I1218 23:53:41.845871  881462 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 23:53:41.845879  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272
	I1218 23:53:41.845918  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272
	I1218 23:53:41.872206  881462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33516 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272/id_rsa Username:docker}
	I1218 23:53:41.880008  881462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33516 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272/id_rsa Username:docker}
	I1218 23:53:41.972416  881462 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702920864-17822", "minikube_version": "v1.32.0", "commit": "ef0b5630ad6ebb50e754541e2a9ebe20f96d24a4"}
	I1218 23:53:41.972550  881462 ssh_runner.go:195] Run: systemctl --version
	I1218 23:53:42.115387  881462 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 23:53:42.115459  881462 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1218 23:53:42.115481  881462 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1218 23:53:42.115553  881462 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1218 23:53:42.282165  881462 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 23:53:42.288538  881462 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1218 23:53:42.288567  881462 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1218 23:53:42.288576  881462 command_runner.go:130] > Device: 36h/54d	Inode: 3636410     Links: 1
	I1218 23:53:42.288584  881462 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 23:53:42.288591  881462 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1218 23:53:42.288598  881462 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1218 23:53:42.288629  881462 command_runner.go:130] > Change: 2023-12-18 23:32:03.407141962 +0000
	I1218 23:53:42.288640  881462 command_runner.go:130] >  Birth: 2023-12-18 23:32:03.407141962 +0000
	I1218 23:53:42.288716  881462 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 23:53:42.319572  881462 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1218 23:53:42.319667  881462 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 23:53:42.368004  881462 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1218 23:53:42.368099  881462 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1218 23:53:42.368124  881462 start.go:475] detecting cgroup driver to use...
	I1218 23:53:42.368184  881462 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1218 23:53:42.368297  881462 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 23:53:42.390707  881462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 23:53:42.406583  881462 docker.go:203] disabling cri-docker service (if available) ...
	I1218 23:53:42.406728  881462 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 23:53:42.423375  881462 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 23:53:42.440244  881462 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 23:53:42.534236  881462 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 23:53:42.642545  881462 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1218 23:53:42.642623  881462 docker.go:219] disabling docker service ...
	I1218 23:53:42.642707  881462 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 23:53:42.664910  881462 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 23:53:42.678921  881462 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 23:53:42.772230  881462 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1218 23:53:42.772328  881462 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 23:53:42.879711  881462 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1218 23:53:42.879850  881462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 23:53:42.893842  881462 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 23:53:42.911873  881462 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1218 23:53:42.913179  881462 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1218 23:53:42.913240  881462 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1218 23:53:42.925863  881462 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1218 23:53:42.926002  881462 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1218 23:53:42.939794  881462 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1218 23:53:42.951633  881462 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1218 23:53:42.964059  881462 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 23:53:42.975172  881462 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 23:53:42.984482  881462 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 23:53:42.985813  881462 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 23:53:42.995935  881462 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 23:53:43.090134  881462 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1218 23:53:43.193258  881462 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1218 23:53:43.193372  881462 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1218 23:53:43.198089  881462 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1218 23:53:43.198167  881462 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1218 23:53:43.198188  881462 command_runner.go:130] > Device: 44h/68d	Inode: 190         Links: 1
	I1218 23:53:43.198210  881462 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 23:53:43.198242  881462 command_runner.go:130] > Access: 2023-12-18 23:53:43.176292507 +0000
	I1218 23:53:43.198270  881462 command_runner.go:130] > Modify: 2023-12-18 23:53:43.176292507 +0000
	I1218 23:53:43.198291  881462 command_runner.go:130] > Change: 2023-12-18 23:53:43.176292507 +0000
	I1218 23:53:43.198308  881462 command_runner.go:130] >  Birth: -
	I1218 23:53:43.198358  881462 start.go:543] Will wait 60s for crictl version
	I1218 23:53:43.198431  881462 ssh_runner.go:195] Run: which crictl
	I1218 23:53:43.203068  881462 command_runner.go:130] > /usr/bin/crictl
	I1218 23:53:43.203539  881462 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1218 23:53:43.243259  881462 command_runner.go:130] > Version:  0.1.0
	I1218 23:53:43.243335  881462 command_runner.go:130] > RuntimeName:  cri-o
	I1218 23:53:43.243358  881462 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1218 23:53:43.243379  881462 command_runner.go:130] > RuntimeApiVersion:  v1
	I1218 23:53:43.245815  881462 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1218 23:53:43.245961  881462 ssh_runner.go:195] Run: crio --version
	I1218 23:53:43.290705  881462 command_runner.go:130] > crio version 1.24.6
	I1218 23:53:43.290776  881462 command_runner.go:130] > Version:          1.24.6
	I1218 23:53:43.290805  881462 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1218 23:53:43.290824  881462 command_runner.go:130] > GitTreeState:     clean
	I1218 23:53:43.290854  881462 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1218 23:53:43.290879  881462 command_runner.go:130] > GoVersion:        go1.18.2
	I1218 23:53:43.290898  881462 command_runner.go:130] > Compiler:         gc
	I1218 23:53:43.290936  881462 command_runner.go:130] > Platform:         linux/arm64
	I1218 23:53:43.290970  881462 command_runner.go:130] > Linkmode:         dynamic
	I1218 23:53:43.290997  881462 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1218 23:53:43.291017  881462 command_runner.go:130] > SeccompEnabled:   true
	I1218 23:53:43.291035  881462 command_runner.go:130] > AppArmorEnabled:  false
	I1218 23:53:43.292453  881462 ssh_runner.go:195] Run: crio --version
	I1218 23:53:43.336394  881462 command_runner.go:130] > crio version 1.24.6
	I1218 23:53:43.336438  881462 command_runner.go:130] > Version:          1.24.6
	I1218 23:53:43.336455  881462 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1218 23:53:43.336461  881462 command_runner.go:130] > GitTreeState:     clean
	I1218 23:53:43.336468  881462 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1218 23:53:43.336474  881462 command_runner.go:130] > GoVersion:        go1.18.2
	I1218 23:53:43.336479  881462 command_runner.go:130] > Compiler:         gc
	I1218 23:53:43.336485  881462 command_runner.go:130] > Platform:         linux/arm64
	I1218 23:53:43.336492  881462 command_runner.go:130] > Linkmode:         dynamic
	I1218 23:53:43.336503  881462 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1218 23:53:43.336508  881462 command_runner.go:130] > SeccompEnabled:   true
	I1218 23:53:43.336515  881462 command_runner.go:130] > AppArmorEnabled:  false
	I1218 23:53:43.340357  881462 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1218 23:53:43.342150  881462 cli_runner.go:164] Run: docker network inspect multinode-320272 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
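The cli_runner call above pulls the network name, driver, subnet, gateway, MTU and container IPs out of docker network inspect with a single Go template. A smaller sketch of the same idea, extracting only the subnet and gateway of the multinode-320272 network:

	docker network inspect multinode-320272 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'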
	I1218 23:53:43.360093  881462 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1218 23:53:43.364808  881462 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
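The bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and re-adds it pointing at the gateway 192.168.58.1; it stages the result in /tmp/h.$$ and copies it back with sudo because the shell redirection itself runs as the unprivileged user. A quick check after it runs:

	grep -n 'host.minikube.internal' /etc/hosts   # expect a single line mapping 192.168.58.1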
	I1218 23:53:43.378810  881462 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1218 23:53:43.378888  881462 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 23:53:43.444375  881462 command_runner.go:130] > {
	I1218 23:53:43.444397  881462 command_runner.go:130] >   "images": [
	I1218 23:53:43.444403  881462 command_runner.go:130] >     {
	I1218 23:53:43.444413  881462 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1218 23:53:43.444418  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.444426  881462 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1218 23:53:43.444431  881462 command_runner.go:130] >       ],
	I1218 23:53:43.444441  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.444460  881462 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1218 23:53:43.444473  881462 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1218 23:53:43.444481  881462 command_runner.go:130] >       ],
	I1218 23:53:43.444487  881462 command_runner.go:130] >       "size": "60867618",
	I1218 23:53:43.444495  881462 command_runner.go:130] >       "uid": null,
	I1218 23:53:43.444501  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.444517  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.444527  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.444533  881462 command_runner.go:130] >     },
	I1218 23:53:43.444541  881462 command_runner.go:130] >     {
	I1218 23:53:43.444549  881462 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1218 23:53:43.444558  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.444564  881462 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1218 23:53:43.444572  881462 command_runner.go:130] >       ],
	I1218 23:53:43.444578  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.444591  881462 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1218 23:53:43.444604  881462 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1218 23:53:43.444612  881462 command_runner.go:130] >       ],
	I1218 23:53:43.444621  881462 command_runner.go:130] >       "size": "29037500",
	I1218 23:53:43.444630  881462 command_runner.go:130] >       "uid": null,
	I1218 23:53:43.444636  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.444644  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.444649  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.444657  881462 command_runner.go:130] >     },
	I1218 23:53:43.444662  881462 command_runner.go:130] >     {
	I1218 23:53:43.444672  881462 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1218 23:53:43.444682  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.444692  881462 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1218 23:53:43.444696  881462 command_runner.go:130] >       ],
	I1218 23:53:43.444702  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.444715  881462 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1218 23:53:43.444724  881462 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1218 23:53:43.444732  881462 command_runner.go:130] >       ],
	I1218 23:53:43.444737  881462 command_runner.go:130] >       "size": "51393451",
	I1218 23:53:43.444746  881462 command_runner.go:130] >       "uid": null,
	I1218 23:53:43.444751  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.444759  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.444765  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.444774  881462 command_runner.go:130] >     },
	I1218 23:53:43.444782  881462 command_runner.go:130] >     {
	I1218 23:53:43.444790  881462 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1218 23:53:43.444798  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.444805  881462 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1218 23:53:43.444811  881462 command_runner.go:130] >       ],
	I1218 23:53:43.444822  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.444834  881462 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1218 23:53:43.444846  881462 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1218 23:53:43.444861  881462 command_runner.go:130] >       ],
	I1218 23:53:43.444870  881462 command_runner.go:130] >       "size": "182203183",
	I1218 23:53:43.444875  881462 command_runner.go:130] >       "uid": {
	I1218 23:53:43.444883  881462 command_runner.go:130] >         "value": "0"
	I1218 23:53:43.444888  881462 command_runner.go:130] >       },
	I1218 23:53:43.444893  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.444902  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.444907  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.444914  881462 command_runner.go:130] >     },
	I1218 23:53:43.444919  881462 command_runner.go:130] >     {
	I1218 23:53:43.444930  881462 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I1218 23:53:43.444938  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.444944  881462 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1218 23:53:43.444952  881462 command_runner.go:130] >       ],
	I1218 23:53:43.444957  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.444972  881462 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I1218 23:53:43.444985  881462 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I1218 23:53:43.444994  881462 command_runner.go:130] >       ],
	I1218 23:53:43.445000  881462 command_runner.go:130] >       "size": "121119694",
	I1218 23:53:43.445008  881462 command_runner.go:130] >       "uid": {
	I1218 23:53:43.445013  881462 command_runner.go:130] >         "value": "0"
	I1218 23:53:43.445020  881462 command_runner.go:130] >       },
	I1218 23:53:43.445026  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.445034  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.445039  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.445047  881462 command_runner.go:130] >     },
	I1218 23:53:43.445051  881462 command_runner.go:130] >     {
	I1218 23:53:43.445087  881462 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I1218 23:53:43.445096  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.445104  881462 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1218 23:53:43.445111  881462 command_runner.go:130] >       ],
	I1218 23:53:43.445117  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.445129  881462 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1218 23:53:43.445144  881462 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I1218 23:53:43.445152  881462 command_runner.go:130] >       ],
	I1218 23:53:43.445157  881462 command_runner.go:130] >       "size": "117252916",
	I1218 23:53:43.445162  881462 command_runner.go:130] >       "uid": {
	I1218 23:53:43.445171  881462 command_runner.go:130] >         "value": "0"
	I1218 23:53:43.445176  881462 command_runner.go:130] >       },
	I1218 23:53:43.445184  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.445190  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.445198  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.445206  881462 command_runner.go:130] >     },
	I1218 23:53:43.445211  881462 command_runner.go:130] >     {
	I1218 23:53:43.445222  881462 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I1218 23:53:43.445230  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.445236  881462 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1218 23:53:43.445243  881462 command_runner.go:130] >       ],
	I1218 23:53:43.445249  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.445261  881462 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I1218 23:53:43.445274  881462 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1218 23:53:43.445285  881462 command_runner.go:130] >       ],
	I1218 23:53:43.445294  881462 command_runner.go:130] >       "size": "69992343",
	I1218 23:53:43.445299  881462 command_runner.go:130] >       "uid": null,
	I1218 23:53:43.445307  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.445313  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.445321  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.445326  881462 command_runner.go:130] >     },
	I1218 23:53:43.445332  881462 command_runner.go:130] >     {
	I1218 23:53:43.445340  881462 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I1218 23:53:43.445348  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.445354  881462 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1218 23:53:43.445362  881462 command_runner.go:130] >       ],
	I1218 23:53:43.445367  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.445389  881462 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1218 23:53:43.445403  881462 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I1218 23:53:43.445411  881462 command_runner.go:130] >       ],
	I1218 23:53:43.445416  881462 command_runner.go:130] >       "size": "59253556",
	I1218 23:53:43.445421  881462 command_runner.go:130] >       "uid": {
	I1218 23:53:43.445432  881462 command_runner.go:130] >         "value": "0"
	I1218 23:53:43.445440  881462 command_runner.go:130] >       },
	I1218 23:53:43.445446  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.445453  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.445459  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.445466  881462 command_runner.go:130] >     },
	I1218 23:53:43.445471  881462 command_runner.go:130] >     {
	I1218 23:53:43.445482  881462 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1218 23:53:43.445490  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.445496  881462 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1218 23:53:43.445501  881462 command_runner.go:130] >       ],
	I1218 23:53:43.445510  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.445519  881462 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1218 23:53:43.445531  881462 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1218 23:53:43.445539  881462 command_runner.go:130] >       ],
	I1218 23:53:43.445545  881462 command_runner.go:130] >       "size": "520014",
	I1218 23:53:43.445552  881462 command_runner.go:130] >       "uid": {
	I1218 23:53:43.445558  881462 command_runner.go:130] >         "value": "65535"
	I1218 23:53:43.445568  881462 command_runner.go:130] >       },
	I1218 23:53:43.445576  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.445582  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.445590  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.445594  881462 command_runner.go:130] >     }
	I1218 23:53:43.445599  881462 command_runner.go:130] >   ]
	I1218 23:53:43.445605  881462 command_runner.go:130] > }
	I1218 23:53:43.447976  881462 crio.go:496] all images are preloaded for cri-o runtime.
	I1218 23:53:43.447998  881462 crio.go:415] Images already preloaded, skipping extraction
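crio.go concludes from the inventory above that every image needed for Kubernetes v1.28.4 on CRI-O is already present, so the preload tarball is not extracted. A hedged way to list just the tags from that JSON on the node, assuming jq is installed there:

	sudo crictl images --output json | jq -r '.images[].repoTags[]'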
	I1218 23:53:43.448054  881462 ssh_runner.go:195] Run: sudo crictl images --output json
	I1218 23:53:43.491570  881462 command_runner.go:130] > {
	I1218 23:53:43.491593  881462 command_runner.go:130] >   "images": [
	I1218 23:53:43.491599  881462 command_runner.go:130] >     {
	I1218 23:53:43.491609  881462 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1218 23:53:43.491614  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.491622  881462 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1218 23:53:43.491629  881462 command_runner.go:130] >       ],
	I1218 23:53:43.491634  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.491645  881462 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1218 23:53:43.491657  881462 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1218 23:53:43.491665  881462 command_runner.go:130] >       ],
	I1218 23:53:43.491670  881462 command_runner.go:130] >       "size": "60867618",
	I1218 23:53:43.491675  881462 command_runner.go:130] >       "uid": null,
	I1218 23:53:43.491683  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.491689  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.491694  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.491701  881462 command_runner.go:130] >     },
	I1218 23:53:43.491706  881462 command_runner.go:130] >     {
	I1218 23:53:43.491717  881462 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1218 23:53:43.491725  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.491731  881462 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1218 23:53:43.491736  881462 command_runner.go:130] >       ],
	I1218 23:53:43.491741  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.491751  881462 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1218 23:53:43.491764  881462 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1218 23:53:43.491772  881462 command_runner.go:130] >       ],
	I1218 23:53:43.491779  881462 command_runner.go:130] >       "size": "29037500",
	I1218 23:53:43.491786  881462 command_runner.go:130] >       "uid": null,
	I1218 23:53:43.491792  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.491799  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.491806  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.491812  881462 command_runner.go:130] >     },
	I1218 23:53:43.491817  881462 command_runner.go:130] >     {
	I1218 23:53:43.491824  881462 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1218 23:53:43.491829  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.491838  881462 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1218 23:53:43.491843  881462 command_runner.go:130] >       ],
	I1218 23:53:43.491848  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.491857  881462 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1218 23:53:43.491873  881462 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1218 23:53:43.491877  881462 command_runner.go:130] >       ],
	I1218 23:53:43.491883  881462 command_runner.go:130] >       "size": "51393451",
	I1218 23:53:43.491890  881462 command_runner.go:130] >       "uid": null,
	I1218 23:53:43.491897  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.491903  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.491910  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.491915  881462 command_runner.go:130] >     },
	I1218 23:53:43.491919  881462 command_runner.go:130] >     {
	I1218 23:53:43.491930  881462 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1218 23:53:43.491934  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.491941  881462 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1218 23:53:43.491962  881462 command_runner.go:130] >       ],
	I1218 23:53:43.491968  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.491977  881462 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1218 23:53:43.491986  881462 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1218 23:53:43.492000  881462 command_runner.go:130] >       ],
	I1218 23:53:43.492005  881462 command_runner.go:130] >       "size": "182203183",
	I1218 23:53:43.492011  881462 command_runner.go:130] >       "uid": {
	I1218 23:53:43.492019  881462 command_runner.go:130] >         "value": "0"
	I1218 23:53:43.492024  881462 command_runner.go:130] >       },
	I1218 23:53:43.492032  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.492040  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.492045  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.492049  881462 command_runner.go:130] >     },
	I1218 23:53:43.492061  881462 command_runner.go:130] >     {
	I1218 23:53:43.492069  881462 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I1218 23:53:43.492077  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.492084  881462 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1218 23:53:43.492089  881462 command_runner.go:130] >       ],
	I1218 23:53:43.492099  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.492111  881462 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I1218 23:53:43.492121  881462 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I1218 23:53:43.492128  881462 command_runner.go:130] >       ],
	I1218 23:53:43.492134  881462 command_runner.go:130] >       "size": "121119694",
	I1218 23:53:43.492139  881462 command_runner.go:130] >       "uid": {
	I1218 23:53:43.492147  881462 command_runner.go:130] >         "value": "0"
	I1218 23:53:43.492152  881462 command_runner.go:130] >       },
	I1218 23:53:43.492158  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.492167  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.492172  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.492177  881462 command_runner.go:130] >     },
	I1218 23:53:43.492184  881462 command_runner.go:130] >     {
	I1218 23:53:43.492192  881462 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I1218 23:53:43.492197  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.492206  881462 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1218 23:53:43.492228  881462 command_runner.go:130] >       ],
	I1218 23:53:43.492237  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.492247  881462 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1218 23:53:43.492261  881462 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I1218 23:53:43.492265  881462 command_runner.go:130] >       ],
	I1218 23:53:43.492271  881462 command_runner.go:130] >       "size": "117252916",
	I1218 23:53:43.492276  881462 command_runner.go:130] >       "uid": {
	I1218 23:53:43.492283  881462 command_runner.go:130] >         "value": "0"
	I1218 23:53:43.492289  881462 command_runner.go:130] >       },
	I1218 23:53:43.492295  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.492302  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.492309  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.492314  881462 command_runner.go:130] >     },
	I1218 23:53:43.492321  881462 command_runner.go:130] >     {
	I1218 23:53:43.492330  881462 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I1218 23:53:43.492337  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.492343  881462 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1218 23:53:43.492349  881462 command_runner.go:130] >       ],
	I1218 23:53:43.492354  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.492364  881462 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I1218 23:53:43.492376  881462 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1218 23:53:43.492380  881462 command_runner.go:130] >       ],
	I1218 23:53:43.492386  881462 command_runner.go:130] >       "size": "69992343",
	I1218 23:53:43.492394  881462 command_runner.go:130] >       "uid": null,
	I1218 23:53:43.492398  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.492404  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.492411  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.492415  881462 command_runner.go:130] >     },
	I1218 23:53:43.492420  881462 command_runner.go:130] >     {
	I1218 23:53:43.492431  881462 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I1218 23:53:43.492438  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.492445  881462 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1218 23:53:43.492453  881462 command_runner.go:130] >       ],
	I1218 23:53:43.492458  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.492480  881462 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1218 23:53:43.492493  881462 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I1218 23:53:43.492498  881462 command_runner.go:130] >       ],
	I1218 23:53:43.492509  881462 command_runner.go:130] >       "size": "59253556",
	I1218 23:53:43.492514  881462 command_runner.go:130] >       "uid": {
	I1218 23:53:43.492519  881462 command_runner.go:130] >         "value": "0"
	I1218 23:53:43.492523  881462 command_runner.go:130] >       },
	I1218 23:53:43.492528  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.492535  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.492543  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.492549  881462 command_runner.go:130] >     },
	I1218 23:53:43.492556  881462 command_runner.go:130] >     {
	I1218 23:53:43.492563  881462 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1218 23:53:43.492571  881462 command_runner.go:130] >       "repoTags": [
	I1218 23:53:43.492580  881462 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1218 23:53:43.492584  881462 command_runner.go:130] >       ],
	I1218 23:53:43.492590  881462 command_runner.go:130] >       "repoDigests": [
	I1218 23:53:43.492599  881462 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1218 23:53:43.492610  881462 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1218 23:53:43.492615  881462 command_runner.go:130] >       ],
	I1218 23:53:43.492622  881462 command_runner.go:130] >       "size": "520014",
	I1218 23:53:43.492629  881462 command_runner.go:130] >       "uid": {
	I1218 23:53:43.492634  881462 command_runner.go:130] >         "value": "65535"
	I1218 23:53:43.492641  881462 command_runner.go:130] >       },
	I1218 23:53:43.492646  881462 command_runner.go:130] >       "username": "",
	I1218 23:53:43.492651  881462 command_runner.go:130] >       "spec": null,
	I1218 23:53:43.492656  881462 command_runner.go:130] >       "pinned": false
	I1218 23:53:43.492663  881462 command_runner.go:130] >     }
	I1218 23:53:43.492667  881462 command_runner.go:130] >   ]
	I1218 23:53:43.492671  881462 command_runner.go:130] > }
	I1218 23:53:43.495474  881462 crio.go:496] all images are preloaded for cri-o runtime.
	I1218 23:53:43.495498  881462 cache_images.go:84] Images are preloaded, skipping loading
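With the image check done, the next command dumps CRI-O's effective configuration; crio config merges the built-in defaults with /etc/crio/crio.conf and the drop-ins edited earlier, which is why the two keys changed above appear uncommented in the output below. A small filter for just those keys (sudo may be needed if the config files are not world-readable):

	crio config 2>/dev/null | grep -E '^(cgroup_manager|conmon_cgroup)'
	# expected, per the dump that follows:
	#   conmon_cgroup = "pod"
	#   cgroup_manager = "cgroupfs"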
	I1218 23:53:43.495574  881462 ssh_runner.go:195] Run: crio config
	I1218 23:53:43.549626  881462 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1218 23:53:43.549655  881462 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1218 23:53:43.549664  881462 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1218 23:53:43.549669  881462 command_runner.go:130] > #
	I1218 23:53:43.549679  881462 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1218 23:53:43.549687  881462 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1218 23:53:43.549695  881462 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1218 23:53:43.549708  881462 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1218 23:53:43.549714  881462 command_runner.go:130] > # reload'.
	I1218 23:53:43.549724  881462 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1218 23:53:43.549735  881462 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1218 23:53:43.549743  881462 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1218 23:53:43.549753  881462 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1218 23:53:43.549760  881462 command_runner.go:130] > [crio]
	I1218 23:53:43.549767  881462 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1218 23:53:43.549776  881462 command_runner.go:130] > # containers images, in this directory.
	I1218 23:53:43.550315  881462 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1218 23:53:43.550340  881462 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1218 23:53:43.550797  881462 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1218 23:53:43.550814  881462 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1218 23:53:43.550823  881462 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1218 23:53:43.551268  881462 command_runner.go:130] > # storage_driver = "vfs"
	I1218 23:53:43.551284  881462 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1218 23:53:43.551292  881462 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1218 23:53:43.551515  881462 command_runner.go:130] > # storage_option = [
	I1218 23:53:43.551803  881462 command_runner.go:130] > # ]
	I1218 23:53:43.551820  881462 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1218 23:53:43.551829  881462 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1218 23:53:43.552320  881462 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1218 23:53:43.552337  881462 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1218 23:53:43.552346  881462 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1218 23:53:43.552352  881462 command_runner.go:130] > # always happen on a node reboot
	I1218 23:53:43.552801  881462 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1218 23:53:43.552817  881462 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1218 23:53:43.552825  881462 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1218 23:53:43.552846  881462 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1218 23:53:43.553334  881462 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1218 23:53:43.553354  881462 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1218 23:53:43.553366  881462 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1218 23:53:43.553837  881462 command_runner.go:130] > # internal_wipe = true
	I1218 23:53:43.553855  881462 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1218 23:53:43.553865  881462 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1218 23:53:43.553875  881462 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1218 23:53:43.554344  881462 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1218 23:53:43.554365  881462 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1218 23:53:43.554371  881462 command_runner.go:130] > [crio.api]
	I1218 23:53:43.554381  881462 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1218 23:53:43.554827  881462 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1218 23:53:43.554842  881462 command_runner.go:130] > # IP address on which the stream server will listen.
	I1218 23:53:43.555277  881462 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1218 23:53:43.555295  881462 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1218 23:53:43.555303  881462 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1218 23:53:43.555744  881462 command_runner.go:130] > # stream_port = "0"
	I1218 23:53:43.555767  881462 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1218 23:53:43.556276  881462 command_runner.go:130] > # stream_enable_tls = false
	I1218 23:53:43.556292  881462 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1218 23:53:43.556649  881462 command_runner.go:130] > # stream_idle_timeout = ""
	I1218 23:53:43.556671  881462 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1218 23:53:43.556681  881462 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1218 23:53:43.556689  881462 command_runner.go:130] > # minutes.
	I1218 23:53:43.557032  881462 command_runner.go:130] > # stream_tls_cert = ""
	I1218 23:53:43.557049  881462 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1218 23:53:43.557057  881462 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1218 23:53:43.557435  881462 command_runner.go:130] > # stream_tls_key = ""
	I1218 23:53:43.557452  881462 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1218 23:53:43.557461  881462 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1218 23:53:43.557468  881462 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1218 23:53:43.557820  881462 command_runner.go:130] > # stream_tls_ca = ""
	I1218 23:53:43.557838  881462 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1218 23:53:43.558293  881462 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1218 23:53:43.558311  881462 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1218 23:53:43.558789  881462 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1218 23:53:43.558817  881462 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1218 23:53:43.558826  881462 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1218 23:53:43.558833  881462 command_runner.go:130] > [crio.runtime]
	I1218 23:53:43.558841  881462 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1218 23:53:43.558851  881462 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1218 23:53:43.558856  881462 command_runner.go:130] > # "nofile=1024:2048"
	I1218 23:53:43.558864  881462 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1218 23:53:43.559108  881462 command_runner.go:130] > # default_ulimits = [
	I1218 23:53:43.559354  881462 command_runner.go:130] > # ]
	I1218 23:53:43.559370  881462 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1218 23:53:43.559825  881462 command_runner.go:130] > # no_pivot = false
	I1218 23:53:43.559840  881462 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1218 23:53:43.559849  881462 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1218 23:53:43.560314  881462 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1218 23:53:43.560330  881462 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1218 23:53:43.560337  881462 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1218 23:53:43.560346  881462 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1218 23:53:43.560694  881462 command_runner.go:130] > # conmon = ""
	I1218 23:53:43.560709  881462 command_runner.go:130] > # Cgroup setting for conmon
	I1218 23:53:43.560720  881462 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1218 23:53:43.560951  881462 command_runner.go:130] > conmon_cgroup = "pod"
	I1218 23:53:43.560969  881462 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1218 23:53:43.560976  881462 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1218 23:53:43.560985  881462 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1218 23:53:43.561203  881462 command_runner.go:130] > # conmon_env = [
	I1218 23:53:43.561451  881462 command_runner.go:130] > # ]
	I1218 23:53:43.561467  881462 command_runner.go:130] > # Additional environment variables to set for all the
	I1218 23:53:43.561474  881462 command_runner.go:130] > # containers. These are overridden if set in the
	I1218 23:53:43.561491  881462 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1218 23:53:43.561712  881462 command_runner.go:130] > # default_env = [
	I1218 23:53:43.561950  881462 command_runner.go:130] > # ]
	I1218 23:53:43.561966  881462 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1218 23:53:43.562410  881462 command_runner.go:130] > # selinux = false
	I1218 23:53:43.562430  881462 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1218 23:53:43.562438  881462 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1218 23:53:43.562446  881462 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1218 23:53:43.562800  881462 command_runner.go:130] > # seccomp_profile = ""
	I1218 23:53:43.562816  881462 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1218 23:53:43.562824  881462 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1218 23:53:43.562832  881462 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1218 23:53:43.562849  881462 command_runner.go:130] > # which might increase security.
	I1218 23:53:43.563303  881462 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1218 23:53:43.563320  881462 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1218 23:53:43.563329  881462 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1218 23:53:43.563337  881462 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1218 23:53:43.563344  881462 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1218 23:53:43.563351  881462 command_runner.go:130] > # This option supports live configuration reload.
	I1218 23:53:43.563798  881462 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1218 23:53:43.563817  881462 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1218 23:53:43.563823  881462 command_runner.go:130] > # the cgroup blockio controller.
	I1218 23:53:43.564205  881462 command_runner.go:130] > # blockio_config_file = ""
	I1218 23:53:43.564223  881462 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1218 23:53:43.564229  881462 command_runner.go:130] > # irqbalance daemon.
	I1218 23:53:43.564682  881462 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1218 23:53:43.564699  881462 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1218 23:53:43.564706  881462 command_runner.go:130] > # This option supports live configuration reload.
	I1218 23:53:43.565051  881462 command_runner.go:130] > # rdt_config_file = ""
	I1218 23:53:43.565078  881462 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1218 23:53:43.565323  881462 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1218 23:53:43.565339  881462 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1218 23:53:43.565682  881462 command_runner.go:130] > # separate_pull_cgroup = ""
	I1218 23:53:43.565699  881462 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1218 23:53:43.565707  881462 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1218 23:53:43.565712  881462 command_runner.go:130] > # will be added.
	I1218 23:53:43.565920  881462 command_runner.go:130] > # default_capabilities = [
	I1218 23:53:43.566876  881462 command_runner.go:130] > # 	"CHOWN",
	I1218 23:53:43.567150  881462 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1218 23:53:43.567425  881462 command_runner.go:130] > # 	"FSETID",
	I1218 23:53:43.567702  881462 command_runner.go:130] > # 	"FOWNER",
	I1218 23:53:43.568025  881462 command_runner.go:130] > # 	"SETGID",
	I1218 23:53:43.568325  881462 command_runner.go:130] > # 	"SETUID",
	I1218 23:53:43.568614  881462 command_runner.go:130] > # 	"SETPCAP",
	I1218 23:53:43.568904  881462 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1218 23:53:43.569298  881462 command_runner.go:130] > # 	"KILL",
	I1218 23:53:43.569598  881462 command_runner.go:130] > # ]
	I1218 23:53:43.569627  881462 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1218 23:53:43.569652  881462 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1218 23:53:43.570228  881462 command_runner.go:130] > # add_inheritable_capabilities = true
	I1218 23:53:43.570248  881462 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1218 23:53:43.570257  881462 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1218 23:53:43.570664  881462 command_runner.go:130] > # default_sysctls = [
	I1218 23:53:43.570995  881462 command_runner.go:130] > # ]
	I1218 23:53:43.571010  881462 command_runner.go:130] > # List of devices on the host that a
	I1218 23:53:43.571018  881462 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1218 23:53:43.571294  881462 command_runner.go:130] > # allowed_devices = [
	I1218 23:53:43.571606  881462 command_runner.go:130] > # 	"/dev/fuse",
	I1218 23:53:43.571893  881462 command_runner.go:130] > # ]
	I1218 23:53:43.571909  881462 command_runner.go:130] > # List of additional devices. specified as
	I1218 23:53:43.571966  881462 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1218 23:53:43.571977  881462 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1218 23:53:43.571985  881462 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1218 23:53:43.572296  881462 command_runner.go:130] > # additional_devices = [
	I1218 23:53:43.572590  881462 command_runner.go:130] > # ]
	I1218 23:53:43.572614  881462 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1218 23:53:43.572906  881462 command_runner.go:130] > # cdi_spec_dirs = [
	I1218 23:53:43.573270  881462 command_runner.go:130] > # 	"/etc/cdi",
	I1218 23:53:43.573560  881462 command_runner.go:130] > # 	"/var/run/cdi",
	I1218 23:53:43.573841  881462 command_runner.go:130] > # ]
	I1218 23:53:43.573857  881462 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1218 23:53:43.573865  881462 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1218 23:53:43.573870  881462 command_runner.go:130] > # Defaults to false.
	I1218 23:53:43.574405  881462 command_runner.go:130] > # device_ownership_from_security_context = false
	I1218 23:53:43.574423  881462 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1218 23:53:43.574431  881462 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1218 23:53:43.574694  881462 command_runner.go:130] > # hooks_dir = [
	I1218 23:53:43.575021  881462 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1218 23:53:43.575312  881462 command_runner.go:130] > # ]
	I1218 23:53:43.575329  881462 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1218 23:53:43.575339  881462 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1218 23:53:43.575346  881462 command_runner.go:130] > # its default mounts from the following two files:
	I1218 23:53:43.575350  881462 command_runner.go:130] > #
	I1218 23:53:43.575365  881462 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1218 23:53:43.575376  881462 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1218 23:53:43.575383  881462 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1218 23:53:43.575390  881462 command_runner.go:130] > #
	I1218 23:53:43.575405  881462 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1218 23:53:43.575416  881462 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1218 23:53:43.575427  881462 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1218 23:53:43.575433  881462 command_runner.go:130] > #      only add mounts it finds in this file.
	I1218 23:53:43.575437  881462 command_runner.go:130] > #
	I1218 23:53:43.575873  881462 command_runner.go:130] > # default_mounts_file = ""
	I1218 23:53:43.575890  881462 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1218 23:53:43.575900  881462 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1218 23:53:43.576191  881462 command_runner.go:130] > # pids_limit = 0
	I1218 23:53:43.576206  881462 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1218 23:53:43.576214  881462 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1218 23:53:43.576222  881462 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1218 23:53:43.576232  881462 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1218 23:53:43.576238  881462 command_runner.go:130] > # log_size_max = -1
	I1218 23:53:43.576247  881462 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1218 23:53:43.576257  881462 command_runner.go:130] > # log_to_journald = false
	I1218 23:53:43.576265  881462 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1218 23:53:43.576421  881462 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1218 23:53:43.576433  881462 command_runner.go:130] > # Path to directory for container attach sockets.
	I1218 23:53:43.576439  881462 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1218 23:53:43.576446  881462 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1218 23:53:43.576451  881462 command_runner.go:130] > # bind_mount_prefix = ""
	I1218 23:53:43.576458  881462 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1218 23:53:43.576463  881462 command_runner.go:130] > # read_only = false
	I1218 23:53:43.576471  881462 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1218 23:53:43.576478  881462 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1218 23:53:43.576494  881462 command_runner.go:130] > # live configuration reload.
	I1218 23:53:43.576500  881462 command_runner.go:130] > # log_level = "info"
	I1218 23:53:43.576507  881462 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1218 23:53:43.576513  881462 command_runner.go:130] > # This option supports live configuration reload.
	I1218 23:53:43.576518  881462 command_runner.go:130] > # log_filter = ""
	I1218 23:53:43.576525  881462 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1218 23:53:43.576532  881462 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1218 23:53:43.576537  881462 command_runner.go:130] > # separated by comma.
	I1218 23:53:43.576542  881462 command_runner.go:130] > # uid_mappings = ""
	I1218 23:53:43.576549  881462 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1218 23:53:43.576557  881462 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1218 23:53:43.576565  881462 command_runner.go:130] > # separated by comma.
	I1218 23:53:43.576570  881462 command_runner.go:130] > # gid_mappings = ""
	I1218 23:53:43.576577  881462 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1218 23:53:43.576584  881462 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1218 23:53:43.576592  881462 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1218 23:53:43.576597  881462 command_runner.go:130] > # minimum_mappable_uid = -1
	I1218 23:53:43.576604  881462 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1218 23:53:43.576613  881462 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1218 23:53:43.576621  881462 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1218 23:53:43.576626  881462 command_runner.go:130] > # minimum_mappable_gid = -1
	I1218 23:53:43.576634  881462 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1218 23:53:43.576642  881462 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1218 23:53:43.576651  881462 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1218 23:53:43.576795  881462 command_runner.go:130] > # ctr_stop_timeout = 30
	I1218 23:53:43.576809  881462 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1218 23:53:43.576817  881462 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1218 23:53:43.576834  881462 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1218 23:53:43.576854  881462 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1218 23:53:43.577037  881462 command_runner.go:130] > # drop_infra_ctr = true
	I1218 23:53:43.577056  881462 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1218 23:53:43.577071  881462 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1218 23:53:43.577082  881462 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1218 23:53:43.577087  881462 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1218 23:53:43.577095  881462 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1218 23:53:43.577105  881462 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1218 23:53:43.577111  881462 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1218 23:53:43.577124  881462 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1218 23:53:43.577129  881462 command_runner.go:130] > # pinns_path = ""
	I1218 23:53:43.577141  881462 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1218 23:53:43.577149  881462 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1218 23:53:43.577157  881462 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1218 23:53:43.577162  881462 command_runner.go:130] > # default_runtime = "runc"
	I1218 23:53:43.577173  881462 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1218 23:53:43.577185  881462 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1218 23:53:43.577203  881462 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1218 23:53:43.577213  881462 command_runner.go:130] > # creation as a file is not desired either.
	I1218 23:53:43.577224  881462 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1218 23:53:43.577233  881462 command_runner.go:130] > # the hostname is being managed dynamically.
	I1218 23:53:43.577239  881462 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1218 23:53:43.577243  881462 command_runner.go:130] > # ]
	I1218 23:53:43.577251  881462 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1218 23:53:43.577263  881462 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1218 23:53:43.577272  881462 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1218 23:53:43.577282  881462 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1218 23:53:43.577286  881462 command_runner.go:130] > #
	I1218 23:53:43.577292  881462 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1218 23:53:43.577298  881462 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1218 23:53:43.577303  881462 command_runner.go:130] > #  runtime_type = "oci"
	I1218 23:53:43.577308  881462 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1218 23:53:43.577314  881462 command_runner.go:130] > #  privileged_without_host_devices = false
	I1218 23:53:43.577320  881462 command_runner.go:130] > #  allowed_annotations = []
	I1218 23:53:43.577324  881462 command_runner.go:130] > # Where:
	I1218 23:53:43.577334  881462 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1218 23:53:43.577343  881462 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1218 23:53:43.577355  881462 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1218 23:53:43.577363  881462 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1218 23:53:43.577371  881462 command_runner.go:130] > #   in $PATH.
	I1218 23:53:43.577379  881462 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1218 23:53:43.577389  881462 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1218 23:53:43.577396  881462 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1218 23:53:43.577401  881462 command_runner.go:130] > #   state.
	I1218 23:53:43.577409  881462 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1218 23:53:43.577418  881462 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1218 23:53:43.577425  881462 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1218 23:53:43.577432  881462 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1218 23:53:43.577440  881462 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1218 23:53:43.577448  881462 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1218 23:53:43.577454  881462 command_runner.go:130] > #   The currently recognized values are:
	I1218 23:53:43.577462  881462 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1218 23:53:43.577471  881462 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1218 23:53:43.577480  881462 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1218 23:53:43.577488  881462 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1218 23:53:43.577497  881462 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1218 23:53:43.577506  881462 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1218 23:53:43.577517  881462 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1218 23:53:43.577525  881462 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1218 23:53:43.577534  881462 command_runner.go:130] > #   should be moved to the container's cgroup
	I1218 23:53:43.577539  881462 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1218 23:53:43.577546  881462 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1218 23:53:43.577554  881462 command_runner.go:130] > runtime_type = "oci"
	I1218 23:53:43.577561  881462 command_runner.go:130] > runtime_root = "/run/runc"
	I1218 23:53:43.577567  881462 command_runner.go:130] > runtime_config_path = ""
	I1218 23:53:43.577572  881462 command_runner.go:130] > monitor_path = ""
	I1218 23:53:43.577578  881462 command_runner.go:130] > monitor_cgroup = ""
	I1218 23:53:43.577583  881462 command_runner.go:130] > monitor_exec_cgroup = ""
	I1218 23:53:43.577625  881462 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1218 23:53:43.577634  881462 command_runner.go:130] > # running containers
	I1218 23:53:43.577639  881462 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1218 23:53:43.577649  881462 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1218 23:53:43.577660  881462 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1218 23:53:43.577670  881462 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1218 23:53:43.577676  881462 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1218 23:53:43.577682  881462 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1218 23:53:43.577688  881462 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1218 23:53:43.577693  881462 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1218 23:53:43.577699  881462 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1218 23:53:43.577704  881462 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
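	Note: the kata handler tables above only take effect once uncommented; on the Kubernetes side a handler name is selected through a RuntimeClass. A minimal sketch, assuming the kata-qemu table has actually been enabled in this file and Kata Containers is installed on the node (names here are illustrative):

	    kubectl apply -f - <<'EOF'
	    apiVersion: node.k8s.io/v1
	    kind: RuntimeClass
	    metadata:
	      name: kata-qemu       # arbitrary name exposed to users
	    handler: kata-qemu      # must match the [crio.runtime.runtimes.<name>] table
	    EOF
	    # Pods then opt in with "runtimeClassName: kata-qemu" in their spec.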
	I1218 23:53:43.577713  881462 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1218 23:53:43.577719  881462 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1218 23:53:43.577727  881462 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1218 23:53:43.577736  881462 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1218 23:53:43.577748  881462 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1218 23:53:43.577760  881462 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1218 23:53:43.577772  881462 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1218 23:53:43.577785  881462 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1218 23:53:43.577792  881462 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1218 23:53:43.577806  881462 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1218 23:53:43.577810  881462 command_runner.go:130] > # Example:
	I1218 23:53:43.577816  881462 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1218 23:53:43.577822  881462 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1218 23:53:43.577828  881462 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1218 23:53:43.577839  881462 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1218 23:53:43.577844  881462 command_runner.go:130] > # cpuset = "0-1"
	I1218 23:53:43.577849  881462 command_runner.go:130] > # cpushares = 0
	I1218 23:53:43.577855  881462 command_runner.go:130] > # Where:
	I1218 23:53:43.577861  881462 command_runner.go:130] > # The workload name is workload-type.
	I1218 23:53:43.577870  881462 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1218 23:53:43.577877  881462 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1218 23:53:43.577884  881462 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1218 23:53:43.577894  881462 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1218 23:53:43.577900  881462 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
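	The example workload above is driven entirely by pod annotations set at creation time. A minimal sketch of a pod opting into it, assuming the workload-type definition shown above and the $annotation_prefix.$resource/$ctrName key form it describes (the container name and share value are hypothetical):

	    kubectl apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: busybox
	      annotations:
	        io.crio/workload: ""                              # activation annotation, value ignored
	        io.crio.workload-type.cpushares/busybox: "512"    # per-container override (hypothetical value)
	    spec:
	      containers:
	      - name: busybox
	        image: busybox
	        command: ["sleep", "3600"]
	    EOF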
	I1218 23:53:43.577905  881462 command_runner.go:130] > # 
	I1218 23:53:43.577912  881462 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1218 23:53:43.577917  881462 command_runner.go:130] > #
	I1218 23:53:43.577933  881462 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1218 23:53:43.577942  881462 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1218 23:53:43.577953  881462 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1218 23:53:43.577961  881462 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1218 23:53:43.577971  881462 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1218 23:53:43.577976  881462 command_runner.go:130] > [crio.image]
	I1218 23:53:43.577983  881462 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1218 23:53:43.577988  881462 command_runner.go:130] > # default_transport = "docker://"
	I1218 23:53:43.577996  881462 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1218 23:53:43.578003  881462 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1218 23:53:43.578008  881462 command_runner.go:130] > # global_auth_file = ""
	I1218 23:53:43.578014  881462 command_runner.go:130] > # The image used to instantiate infra containers.
	I1218 23:53:43.578023  881462 command_runner.go:130] > # This option supports live configuration reload.
	I1218 23:53:43.578029  881462 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1218 23:53:43.578037  881462 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1218 23:53:43.578044  881462 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1218 23:53:43.578050  881462 command_runner.go:130] > # This option supports live configuration reload.
	I1218 23:53:43.578055  881462 command_runner.go:130] > # pause_image_auth_file = ""
	I1218 23:53:43.578065  881462 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1218 23:53:43.578076  881462 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1218 23:53:43.578084  881462 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1218 23:53:43.578091  881462 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1218 23:53:43.578096  881462 command_runner.go:130] > # pause_command = "/pause"
	I1218 23:53:43.578103  881462 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1218 23:53:43.578111  881462 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1218 23:53:43.578119  881462 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1218 23:53:43.578129  881462 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1218 23:53:43.578136  881462 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1218 23:53:43.580600  881462 command_runner.go:130] > # signature_policy = ""
	I1218 23:53:43.580619  881462 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1218 23:53:43.580628  881462 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1218 23:53:43.580633  881462 command_runner.go:130] > # changing them here.
	I1218 23:53:43.580638  881462 command_runner.go:130] > # insecure_registries = [
	I1218 23:53:43.580643  881462 command_runner.go:130] > # ]
	I1218 23:53:43.580650  881462 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1218 23:53:43.580668  881462 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1218 23:53:43.580677  881462 command_runner.go:130] > # image_volumes = "mkdir"
	I1218 23:53:43.580686  881462 command_runner.go:130] > # Temporary directory to use for storing big files
	I1218 23:53:43.580692  881462 command_runner.go:130] > # big_files_temporary_dir = ""
	I1218 23:53:43.580699  881462 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1218 23:53:43.580710  881462 command_runner.go:130] > # CNI plugins.
	I1218 23:53:43.580718  881462 command_runner.go:130] > [crio.network]
	I1218 23:53:43.580725  881462 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1218 23:53:43.580731  881462 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1218 23:53:43.580739  881462 command_runner.go:130] > # cni_default_network = ""
	I1218 23:53:43.580746  881462 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1218 23:53:43.580754  881462 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1218 23:53:43.580761  881462 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1218 23:53:43.580766  881462 command_runner.go:130] > # plugin_dirs = [
	I1218 23:53:43.580771  881462 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1218 23:53:43.580775  881462 command_runner.go:130] > # ]
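	Since CRI-O picks the first configuration it finds under network_dir and resolves plugin binaries from plugin_dirs, a quick way to see what a node would actually offer (a sketch, assuming the default paths above):

	    ls /etc/cni/net.d/    # CNI configs; the first one found is used when cni_default_network is unset
	    ls /opt/cni/bin/      # plugin binaries CRI-O can invoke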
	I1218 23:53:43.580783  881462 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1218 23:53:43.580789  881462 command_runner.go:130] > [crio.metrics]
	I1218 23:53:43.580796  881462 command_runner.go:130] > # Globally enable or disable metrics support.
	I1218 23:53:43.580962  881462 command_runner.go:130] > # enable_metrics = false
	I1218 23:53:43.580980  881462 command_runner.go:130] > # Specify enabled metrics collectors.
	I1218 23:53:43.580986  881462 command_runner.go:130] > # Per default all metrics are enabled.
	I1218 23:53:43.580994  881462 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1218 23:53:43.581002  881462 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1218 23:53:43.581013  881462 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1218 23:53:43.581018  881462 command_runner.go:130] > # metrics_collectors = [
	I1218 23:53:43.581023  881462 command_runner.go:130] > # 	"operations",
	I1218 23:53:43.581036  881462 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1218 23:53:43.581042  881462 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1218 23:53:43.581047  881462 command_runner.go:130] > # 	"operations_errors",
	I1218 23:53:43.581071  881462 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1218 23:53:43.581080  881462 command_runner.go:130] > # 	"image_pulls_by_name",
	I1218 23:53:43.581086  881462 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1218 23:53:43.581093  881462 command_runner.go:130] > # 	"image_pulls_failures",
	I1218 23:53:43.581100  881462 command_runner.go:130] > # 	"image_pulls_successes",
	I1218 23:53:43.581105  881462 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1218 23:53:43.581110  881462 command_runner.go:130] > # 	"image_layer_reuse",
	I1218 23:53:43.581115  881462 command_runner.go:130] > # 	"containers_oom_total",
	I1218 23:53:43.581121  881462 command_runner.go:130] > # 	"containers_oom",
	I1218 23:53:43.581125  881462 command_runner.go:130] > # 	"processes_defunct",
	I1218 23:53:43.581132  881462 command_runner.go:130] > # 	"operations_total",
	I1218 23:53:43.581138  881462 command_runner.go:130] > # 	"operations_latency_seconds",
	I1218 23:53:43.581146  881462 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1218 23:53:43.581151  881462 command_runner.go:130] > # 	"operations_errors_total",
	I1218 23:53:43.581157  881462 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1218 23:53:43.581167  881462 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1218 23:53:43.581175  881462 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1218 23:53:43.581183  881462 command_runner.go:130] > # 	"image_pulls_success_total",
	I1218 23:53:43.581189  881462 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1218 23:53:43.581194  881462 command_runner.go:130] > # 	"containers_oom_count_total",
	I1218 23:53:43.581199  881462 command_runner.go:130] > # ]
	I1218 23:53:43.581205  881462 command_runner.go:130] > # The port on which the metrics server will listen.
	I1218 23:53:43.581210  881462 command_runner.go:130] > # metrics_port = 9090
	I1218 23:53:43.581217  881462 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1218 23:53:43.581224  881462 command_runner.go:130] > # metrics_socket = ""
	I1218 23:53:43.581230  881462 command_runner.go:130] > # The certificate for the secure metrics server.
	I1218 23:53:43.581238  881462 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1218 23:53:43.581248  881462 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1218 23:53:43.581259  881462 command_runner.go:130] > # certificate on any modification event.
	I1218 23:53:43.581265  881462 command_runner.go:130] > # metrics_cert = ""
	I1218 23:53:43.581271  881462 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1218 23:53:43.581277  881462 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1218 23:53:43.581283  881462 command_runner.go:130] > # metrics_key = ""
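	When metrics are enabled, the collectors listed above are exposed in Prometheus text format on metrics_port. A sketch of a spot-check, assuming enable_metrics = true and the default port of 9090:

	    curl -s http://127.0.0.1:9090/metrics | grep -i crio | head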
	I1218 23:53:43.581292  881462 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1218 23:53:43.581300  881462 command_runner.go:130] > [crio.tracing]
	I1218 23:53:43.581307  881462 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1218 23:53:43.581313  881462 command_runner.go:130] > # enable_tracing = false
	I1218 23:53:43.581322  881462 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1218 23:53:43.581327  881462 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1218 23:53:43.581335  881462 command_runner.go:130] > # Number of samples to collect per million spans.
	I1218 23:53:43.581344  881462 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1218 23:53:43.581351  881462 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1218 23:53:43.581356  881462 command_runner.go:130] > [crio.stats]
	I1218 23:53:43.581363  881462 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1218 23:53:43.581372  881462 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1218 23:53:43.581380  881462 command_runner.go:130] > # stats_collection_period = 0
	I1218 23:53:43.581547  881462 command_runner.go:130] ! time="2023-12-18 23:53:43.543896207Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1218 23:53:43.581570  881462 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1218 23:53:43.581644  881462 cni.go:84] Creating CNI manager for ""
	I1218 23:53:43.581655  881462 cni.go:136] 1 nodes found, recommending kindnet
	I1218 23:53:43.581689  881462 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 23:53:43.581712  881462 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-320272 NodeName:multinode-320272 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 23:53:43.581857  881462 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-320272"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 23:53:43.581917  881462 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-320272 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-320272 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1218 23:53:43.581997  881462 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1218 23:53:43.591715  881462 command_runner.go:130] > kubeadm
	I1218 23:53:43.591734  881462 command_runner.go:130] > kubectl
	I1218 23:53:43.591740  881462 command_runner.go:130] > kubelet
	I1218 23:53:43.593103  881462 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 23:53:43.593183  881462 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 23:53:43.603819  881462 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1218 23:53:43.625565  881462 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 23:53:43.647610  881462 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1218 23:53:43.669296  881462 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1218 23:53:43.673906  881462 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
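	minikube pins control-plane.minikube.internal in /etc/hosts so kubeadm and the kubelet can reach the API server by name. A quick check on the node after this step (values as configured above):

	    grep control-plane.minikube.internal /etc/hosts
	    # expected: 192.168.58.2	control-plane.minikube.internal
	    getent hosts control-plane.minikube.internal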
	I1218 23:53:43.688027  881462 certs.go:56] Setting up /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272 for IP: 192.168.58.2
	I1218 23:53:43.688059  881462 certs.go:190] acquiring lock for shared ca certs: {Name:mkb7306ae237ed30250289faa05e9a8d3ae56985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:53:43.688193  881462 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.key
	I1218 23:53:43.688248  881462 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.key
	I1218 23:53:43.688316  881462 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.key
	I1218 23:53:43.688335  881462 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.crt with IP's: []
	I1218 23:53:44.468783  881462 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.crt ...
	I1218 23:53:44.468815  881462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.crt: {Name:mka5df82d4b1a2928b0435647ed4d9821fd9dbd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:53:44.469034  881462 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.key ...
	I1218 23:53:44.469047  881462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.key: {Name:mkfdb1cb50590df59671b754e89b132dda4dbd67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:53:44.469150  881462 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/apiserver.key.cee25041
	I1218 23:53:44.469166  881462 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1218 23:53:44.742827  881462 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/apiserver.crt.cee25041 ...
	I1218 23:53:44.742871  881462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/apiserver.crt.cee25041: {Name:mk7bab92b43929615cbbd897431d6e6366548c0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:53:44.743075  881462 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/apiserver.key.cee25041 ...
	I1218 23:53:44.743093  881462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/apiserver.key.cee25041: {Name:mk2b8421d763f07c2c3a366b8b9416f6f906b525 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:53:44.743185  881462 certs.go:337] copying /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/apiserver.crt
	I1218 23:53:44.743278  881462 certs.go:341] copying /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/apiserver.key
	I1218 23:53:44.743340  881462 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/proxy-client.key
	I1218 23:53:44.743357  881462 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/proxy-client.crt with IP's: []
	I1218 23:53:45.462178  881462 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/proxy-client.crt ...
	I1218 23:53:45.462213  881462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/proxy-client.crt: {Name:mk5043907bc2c73b5842147c1001305f1a4b0e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:53:45.462406  881462 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/proxy-client.key ...
	I1218 23:53:45.462423  881462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/proxy-client.key: {Name:mk8aba913527276aeb6af237fb36d5c424ae1b51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:53:45.462510  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1218 23:53:45.462532  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1218 23:53:45.462544  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1218 23:53:45.462560  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1218 23:53:45.462576  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1218 23:53:45.462592  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1218 23:53:45.462608  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1218 23:53:45.462622  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1218 23:53:45.462678  881462 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/817378.pem (1338 bytes)
	W1218 23:53:45.462719  881462 certs.go:433] ignoring /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/817378_empty.pem, impossibly tiny 0 bytes
	I1218 23:53:45.462734  881462 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 23:53:45.462763  881462 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem (1078 bytes)
	I1218 23:53:45.462792  881462 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem (1123 bytes)
	I1218 23:53:45.462826  881462 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem (1679 bytes)
	I1218 23:53:45.462876  881462 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem (1708 bytes)
	I1218 23:53:45.462906  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem -> /usr/share/ca-certificates/8173782.pem
	I1218 23:53:45.462922  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:53:45.462933  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/817378.pem -> /usr/share/ca-certificates/817378.pem
	I1218 23:53:45.463518  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1218 23:53:45.493530  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1218 23:53:45.522711  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 23:53:45.551934  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 23:53:45.581119  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 23:53:45.610272  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 23:53:45.638918  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 23:53:45.668641  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1218 23:53:45.698110  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem --> /usr/share/ca-certificates/8173782.pem (1708 bytes)
	I1218 23:53:45.727055  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 23:53:45.756490  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/certs/817378.pem --> /usr/share/ca-certificates/817378.pem (1338 bytes)
	I1218 23:53:45.785163  881462 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 23:53:45.806908  881462 ssh_runner.go:195] Run: openssl version
	I1218 23:53:45.813900  881462 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1218 23:53:45.814039  881462 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8173782.pem && ln -fs /usr/share/ca-certificates/8173782.pem /etc/ssl/certs/8173782.pem"
	I1218 23:53:45.825786  881462 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8173782.pem
	I1218 23:53:45.830184  881462 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 18 23:39 /usr/share/ca-certificates/8173782.pem
	I1218 23:53:45.830469  881462 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 18 23:39 /usr/share/ca-certificates/8173782.pem
	I1218 23:53:45.830543  881462 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8173782.pem
	I1218 23:53:45.839001  881462 command_runner.go:130] > 3ec20f2e
	I1218 23:53:45.839097  881462 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8173782.pem /etc/ssl/certs/3ec20f2e.0"
	I1218 23:53:45.850556  881462 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 23:53:45.861882  881462 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:53:45.866643  881462 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 18 23:32 /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:53:45.866678  881462 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 23:32 /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:53:45.866730  881462 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:53:45.875152  881462 command_runner.go:130] > b5213941
	I1218 23:53:45.875614  881462 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1218 23:53:45.886952  881462 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/817378.pem && ln -fs /usr/share/ca-certificates/817378.pem /etc/ssl/certs/817378.pem"
	I1218 23:53:45.898350  881462 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/817378.pem
	I1218 23:53:45.902683  881462 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 18 23:39 /usr/share/ca-certificates/817378.pem
	I1218 23:53:45.902959  881462 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 18 23:39 /usr/share/ca-certificates/817378.pem
	I1218 23:53:45.903014  881462 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/817378.pem
	I1218 23:53:45.911414  881462 command_runner.go:130] > 51391683
	I1218 23:53:45.911563  881462 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/817378.pem /etc/ssl/certs/51391683.0"
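	Each CA certificate is hashed with openssl x509 -hash and symlinked under /etc/ssl/certs by that hash, which is how OpenSSL's default verification path locates it. A sketch of confirming that the link created above resolves to the expected certificate:

	    readlink /etc/ssl/certs/b5213941.0
	    openssl x509 -noout -subject -in /etc/ssl/certs/b5213941.0    # should report the minikubeCA subject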
	I1218 23:53:45.922963  881462 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 23:53:45.927100  881462 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 23:53:45.927134  881462 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 23:53:45.927176  881462 kubeadm.go:404] StartCluster: {Name:multinode-320272 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-320272 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:53:45.927260  881462 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1218 23:53:45.927322  881462 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1218 23:53:45.979243  881462 cri.go:89] found id: ""
	I1218 23:53:45.979316  881462 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 23:53:45.990667  881462 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1218 23:53:45.990692  881462 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1218 23:53:45.990701  881462 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1218 23:53:45.990767  881462 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 23:53:46.003461  881462 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1218 23:53:46.003563  881462 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 23:53:46.016173  881462 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1218 23:53:46.016196  881462 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1218 23:53:46.016206  881462 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1218 23:53:46.016245  881462 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 23:53:46.016279  881462 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 23:53:46.016331  881462 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
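	The long --ignore-preflight-errors list disables checks the docker driver cannot satisfy (swap, system verification, bridge sysctls, and so on). If init fails, the preflight phase alone can be re-run against the same rendered config to narrow things down; a sketch, run on the node, with an illustrative trimmed ignore list:

	    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
	        --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem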
	I1218 23:53:46.075602  881462 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1218 23:53:46.075670  881462 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1218 23:53:46.075973  881462 kubeadm.go:322] [preflight] Running pre-flight checks
	I1218 23:53:46.075993  881462 command_runner.go:130] > [preflight] Running pre-flight checks
	I1218 23:53:46.119992  881462 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1218 23:53:46.120022  881462 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1218 23:53:46.120100  881462 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1218 23:53:46.120113  881462 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I1218 23:53:46.120197  881462 kubeadm.go:322] OS: Linux
	I1218 23:53:46.120237  881462 command_runner.go:130] > OS: Linux
	I1218 23:53:46.120329  881462 kubeadm.go:322] CGROUPS_CPU: enabled
	I1218 23:53:46.120352  881462 command_runner.go:130] > CGROUPS_CPU: enabled
	I1218 23:53:46.120428  881462 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1218 23:53:46.120461  881462 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1218 23:53:46.120550  881462 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1218 23:53:46.120571  881462 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1218 23:53:46.120650  881462 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1218 23:53:46.120678  881462 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1218 23:53:46.120769  881462 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1218 23:53:46.120790  881462 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1218 23:53:46.120858  881462 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1218 23:53:46.120881  881462 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1218 23:53:46.120950  881462 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1218 23:53:46.120976  881462 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1218 23:53:46.121042  881462 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1218 23:53:46.121064  881462 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1218 23:53:46.121145  881462 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1218 23:53:46.121164  881462 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1218 23:53:46.216547  881462 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 23:53:46.216578  881462 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 23:53:46.216667  881462 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 23:53:46.216677  881462 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 23:53:46.216763  881462 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 23:53:46.216771  881462 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1218 23:53:46.464553  881462 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 23:53:46.468926  881462 out.go:204]   - Generating certificates and keys ...
	I1218 23:53:46.464912  881462 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 23:53:46.469052  881462 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1218 23:53:46.469073  881462 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1218 23:53:46.469147  881462 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1218 23:53:46.469158  881462 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1218 23:53:46.730685  881462 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 23:53:46.730717  881462 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 23:53:47.477662  881462 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1218 23:53:47.477731  881462 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1218 23:53:48.109745  881462 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1218 23:53:48.109771  881462 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1218 23:53:48.732605  881462 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1218 23:53:48.732631  881462 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1218 23:53:49.948529  881462 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1218 23:53:49.948555  881462 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1218 23:53:49.948891  881462 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-320272] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1218 23:53:49.948905  881462 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-320272] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1218 23:53:50.359264  881462 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1218 23:53:50.359294  881462 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1218 23:53:50.359422  881462 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-320272] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1218 23:53:50.359432  881462 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-320272] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1218 23:53:50.587771  881462 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 23:53:50.587796  881462 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 23:53:51.307637  881462 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 23:53:51.307663  881462 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 23:53:52.083298  881462 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1218 23:53:52.083328  881462 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1218 23:53:52.083694  881462 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 23:53:52.083711  881462 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 23:53:52.230647  881462 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 23:53:52.230671  881462 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 23:53:52.720067  881462 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 23:53:52.720094  881462 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 23:53:53.455122  881462 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 23:53:53.455150  881462 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 23:53:54.203903  881462 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 23:53:54.203958  881462 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 23:53:54.206535  881462 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 23:53:54.206560  881462 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 23:53:54.209905  881462 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 23:53:54.211987  881462 out.go:204]   - Booting up control plane ...
	I1218 23:53:54.209999  881462 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 23:53:54.212085  881462 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 23:53:54.212101  881462 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 23:53:54.212173  881462 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 23:53:54.212181  881462 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 23:53:54.213068  881462 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 23:53:54.213090  881462 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 23:53:54.224140  881462 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 23:53:54.224170  881462 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 23:53:54.225214  881462 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 23:53:54.225236  881462 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 23:53:54.225460  881462 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1218 23:53:54.225480  881462 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1218 23:53:54.328089  881462 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1218 23:53:54.328120  881462 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1218 23:54:01.332786  881462 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.004042 seconds
	I1218 23:54:01.332816  881462 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.004042 seconds
	I1218 23:54:01.332917  881462 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 23:54:01.332929  881462 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 23:54:01.348171  881462 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 23:54:01.348202  881462 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 23:54:01.871416  881462 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1218 23:54:01.871445  881462 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1218 23:54:01.871626  881462 kubeadm.go:322] [mark-control-plane] Marking the node multinode-320272 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1218 23:54:01.871636  881462 command_runner.go:130] > [mark-control-plane] Marking the node multinode-320272 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1218 23:54:02.385622  881462 kubeadm.go:322] [bootstrap-token] Using token: roqrf0.n0eyk3mg7wap09ax
	I1218 23:54:02.387269  881462 out.go:204]   - Configuring RBAC rules ...
	I1218 23:54:02.385732  881462 command_runner.go:130] > [bootstrap-token] Using token: roqrf0.n0eyk3mg7wap09ax
	I1218 23:54:02.387394  881462 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 23:54:02.387411  881462 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 23:54:02.394384  881462 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 23:54:02.394407  881462 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 23:54:02.402534  881462 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 23:54:02.402562  881462 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 23:54:02.406557  881462 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 23:54:02.406584  881462 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 23:54:02.410732  881462 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 23:54:02.410758  881462 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 23:54:02.414483  881462 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 23:54:02.414507  881462 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 23:54:02.428478  881462 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 23:54:02.428500  881462 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 23:54:02.670124  881462 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1218 23:54:02.670146  881462 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1218 23:54:02.804282  881462 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1218 23:54:02.804304  881462 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1218 23:54:02.808190  881462 kubeadm.go:322] 
	I1218 23:54:02.808263  881462 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1218 23:54:02.808272  881462 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1218 23:54:02.808278  881462 kubeadm.go:322] 
	I1218 23:54:02.808350  881462 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1218 23:54:02.808355  881462 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1218 23:54:02.808359  881462 kubeadm.go:322] 
	I1218 23:54:02.808384  881462 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1218 23:54:02.808389  881462 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1218 23:54:02.808445  881462 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 23:54:02.808450  881462 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 23:54:02.808497  881462 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 23:54:02.808502  881462 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 23:54:02.808506  881462 kubeadm.go:322] 
	I1218 23:54:02.808557  881462 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1218 23:54:02.808562  881462 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1218 23:54:02.808565  881462 kubeadm.go:322] 
	I1218 23:54:02.808610  881462 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1218 23:54:02.808615  881462 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1218 23:54:02.808619  881462 kubeadm.go:322] 
	I1218 23:54:02.808668  881462 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1218 23:54:02.808673  881462 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1218 23:54:02.808742  881462 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 23:54:02.808747  881462 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 23:54:02.808811  881462 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 23:54:02.808816  881462 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 23:54:02.808819  881462 kubeadm.go:322] 
	I1218 23:54:02.808905  881462 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1218 23:54:02.808910  881462 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1218 23:54:02.808982  881462 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1218 23:54:02.808986  881462 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1218 23:54:02.808991  881462 kubeadm.go:322] 
	I1218 23:54:02.809069  881462 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token roqrf0.n0eyk3mg7wap09ax \
	I1218 23:54:02.809074  881462 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token roqrf0.n0eyk3mg7wap09ax \
	I1218 23:54:02.809180  881462 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c459ee434ab47b92cbaa79d344c36da497111b64dc66fca5cb8785b7cbb4349c \
	I1218 23:54:02.809186  881462 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:c459ee434ab47b92cbaa79d344c36da497111b64dc66fca5cb8785b7cbb4349c \
	I1218 23:54:02.809205  881462 kubeadm.go:322] 	--control-plane 
	I1218 23:54:02.809210  881462 command_runner.go:130] > 	--control-plane 
	I1218 23:54:02.809214  881462 kubeadm.go:322] 
	I1218 23:54:02.809293  881462 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1218 23:54:02.809298  881462 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1218 23:54:02.809306  881462 kubeadm.go:322] 
	I1218 23:54:02.809384  881462 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token roqrf0.n0eyk3mg7wap09ax \
	I1218 23:54:02.809388  881462 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token roqrf0.n0eyk3mg7wap09ax \
	I1218 23:54:02.809483  881462 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c459ee434ab47b92cbaa79d344c36da497111b64dc66fca5cb8785b7cbb4349c 
	I1218 23:54:02.809488  881462 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:c459ee434ab47b92cbaa79d344c36da497111b64dc66fca5cb8785b7cbb4349c 
	I1218 23:54:02.809933  881462 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1218 23:54:02.809946  881462 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1218 23:54:02.810044  881462 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 23:54:02.810051  881462 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 23:54:02.810062  881462 cni.go:84] Creating CNI manager for ""
	I1218 23:54:02.810068  881462 cni.go:136] 1 nodes found, recommending kindnet
	I1218 23:54:02.813195  881462 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1218 23:54:02.815121  881462 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1218 23:54:02.827283  881462 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1218 23:54:02.827313  881462 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I1218 23:54:02.827324  881462 command_runner.go:130] > Device: 36h/54d	Inode: 3640141     Links: 1
	I1218 23:54:02.827332  881462 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 23:54:02.827339  881462 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I1218 23:54:02.827348  881462 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I1218 23:54:02.827354  881462 command_runner.go:130] > Change: 2023-12-18 23:32:04.107136004 +0000
	I1218 23:54:02.827361  881462 command_runner.go:130] >  Birth: 2023-12-18 23:32:04.063136379 +0000
	I1218 23:54:02.829909  881462 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1218 23:54:02.829934  881462 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1218 23:54:02.879845  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1218 23:54:03.739671  881462 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1218 23:54:03.746255  881462 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1218 23:54:03.755292  881462 command_runner.go:130] > serviceaccount/kindnet created
	I1218 23:54:03.766965  881462 command_runner.go:130] > daemonset.apps/kindnet created
	I1218 23:54:03.772347  881462 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 23:54:03.772421  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:03.772461  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2 minikube.k8s.io/name=multinode-320272 minikube.k8s.io/updated_at=2023_12_18T23_54_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:03.940820  881462 command_runner.go:130] > node/multinode-320272 labeled
	I1218 23:54:03.945219  881462 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1218 23:54:03.945329  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:03.945374  881462 command_runner.go:130] > -16
	I1218 23:54:03.945389  881462 ops.go:34] apiserver oom_adj: -16
	I1218 23:54:04.051318  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:04.445907  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:04.537922  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:04.946380  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:05.037998  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:05.446332  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:05.539477  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:05.946034  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:06.066896  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:06.446227  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:06.540539  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:06.945938  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:07.037423  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:07.445473  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:07.541139  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:07.945512  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:08.047613  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:08.445628  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:08.538730  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:08.946142  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:09.062829  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:09.446402  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:09.535854  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:09.945468  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:10.043794  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:10.446343  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:10.531745  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:10.945795  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:11.041013  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:11.446152  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:11.543529  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:11.945788  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:12.069290  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:12.445805  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:12.540987  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:12.945556  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:13.064519  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:13.445531  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:13.539887  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:13.945443  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:14.045777  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:14.446104  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:14.533653  881462 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1218 23:54:14.946369  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:54:15.085973  881462 command_runner.go:130] > NAME      SECRETS   AGE
	I1218 23:54:15.085996  881462 command_runner.go:130] > default   0         1s
	I1218 23:54:15.089820  881462 kubeadm.go:1088] duration metric: took 11.31746379s to wait for elevateKubeSystemPrivileges.
	I1218 23:54:15.089856  881462 kubeadm.go:406] StartCluster complete in 29.162683358s
	I1218 23:54:15.089877  881462 settings.go:142] acquiring lock: {Name:mkb4ce0a07455c74d828d76d071a3ad023516aeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:54:15.089953  881462 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:54:15.090757  881462 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-812008/kubeconfig: {Name:mk19de5f3e7863c913095f8f2b91ab4519f12535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:54:15.091358  881462 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:54:15.091987  881462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 23:54:15.092289  881462 config.go:182] Loaded profile config "multinode-320272": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1218 23:54:15.092434  881462 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1218 23:54:15.092565  881462 addons.go:69] Setting storage-provisioner=true in profile "multinode-320272"
	I1218 23:54:15.092584  881462 addons.go:231] Setting addon storage-provisioner=true in "multinode-320272"
	I1218 23:54:15.092648  881462 host.go:66] Checking if "multinode-320272" exists ...
	I1218 23:54:15.093174  881462 cli_runner.go:164] Run: docker container inspect multinode-320272 --format={{.State.Status}}
	I1218 23:54:15.091681  881462 kapi.go:59] client config for multinode-320272: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.crt", KeyFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.key", CAFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf310), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 23:54:15.094693  881462 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1218 23:54:15.094717  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:15.094728  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:15.094737  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:15.094994  881462 cert_rotation.go:137] Starting client certificate rotation controller
	I1218 23:54:15.095556  881462 addons.go:69] Setting default-storageclass=true in profile "multinode-320272"
	I1218 23:54:15.095587  881462 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-320272"
	I1218 23:54:15.095935  881462 cli_runner.go:164] Run: docker container inspect multinode-320272 --format={{.State.Status}}
	I1218 23:54:15.152513  881462 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 23:54:15.154923  881462 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 23:54:15.154951  881462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 23:54:15.155021  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272
	I1218 23:54:15.157011  881462 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:54:15.157296  881462 kapi.go:59] client config for multinode-320272: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.crt", KeyFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.key", CAFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf310), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 23:54:15.157559  881462 addons.go:231] Setting addon default-storageclass=true in "multinode-320272"
	I1218 23:54:15.157587  881462 host.go:66] Checking if "multinode-320272" exists ...
	I1218 23:54:15.158025  881462 cli_runner.go:164] Run: docker container inspect multinode-320272 --format={{.State.Status}}
	I1218 23:54:15.212353  881462 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 23:54:15.212374  881462 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 23:54:15.212438  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272
	I1218 23:54:15.212709  881462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33516 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272/id_rsa Username:docker}
	I1218 23:54:15.235780  881462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33516 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272/id_rsa Username:docker}
	I1218 23:54:15.323039  881462 round_trippers.go:574] Response Status: 200 OK in 228 milliseconds
	I1218 23:54:15.323111  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:15.323134  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:15 GMT
	I1218 23:54:15.323153  881462 round_trippers.go:580]     Audit-Id: 0c3fc05c-6bed-47c2-8a57-bb331d1a75b7
	I1218 23:54:15.323194  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:15.323216  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:15.323234  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:15.323267  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:15.323290  881462 round_trippers.go:580]     Content-Length: 291
	I1218 23:54:15.323987  881462 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"506a1bf3-be48-435b-8d09-a6642bb1a363","resourceVersion":"273","creationTimestamp":"2023-12-18T23:54:02Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1218 23:54:15.324516  881462 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"506a1bf3-be48-435b-8d09-a6642bb1a363","resourceVersion":"273","creationTimestamp":"2023-12-18T23:54:02Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1218 23:54:15.324610  881462 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1218 23:54:15.324634  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:15.324657  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:15.324696  881462 round_trippers.go:473]     Content-Type: application/json
	I1218 23:54:15.324716  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:15.353900  881462 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I1218 23:54:15.353970  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:15.354007  881462 round_trippers.go:580]     Audit-Id: ca216b4b-e161-49bb-9b18-aee8c2a3faf7
	I1218 23:54:15.354031  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:15.354052  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:15.354083  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:15.354106  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:15.354124  881462 round_trippers.go:580]     Content-Length: 291
	I1218 23:54:15.354144  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:15 GMT
	I1218 23:54:15.354194  881462 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"506a1bf3-be48-435b-8d09-a6642bb1a363","resourceVersion":"355","creationTimestamp":"2023-12-18T23:54:02Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1218 23:54:15.369778  881462 command_runner.go:130] > apiVersion: v1
	I1218 23:54:15.369850  881462 command_runner.go:130] > data:
	I1218 23:54:15.369881  881462 command_runner.go:130] >   Corefile: |
	I1218 23:54:15.369900  881462 command_runner.go:130] >     .:53 {
	I1218 23:54:15.369931  881462 command_runner.go:130] >         errors
	I1218 23:54:15.369953  881462 command_runner.go:130] >         health {
	I1218 23:54:15.369972  881462 command_runner.go:130] >            lameduck 5s
	I1218 23:54:15.369991  881462 command_runner.go:130] >         }
	I1218 23:54:15.370020  881462 command_runner.go:130] >         ready
	I1218 23:54:15.370049  881462 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1218 23:54:15.370077  881462 command_runner.go:130] >            pods insecure
	I1218 23:54:15.370129  881462 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1218 23:54:15.370187  881462 command_runner.go:130] >            ttl 30
	I1218 23:54:15.370216  881462 command_runner.go:130] >         }
	I1218 23:54:15.370237  881462 command_runner.go:130] >         prometheus :9153
	I1218 23:54:15.370269  881462 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1218 23:54:15.370293  881462 command_runner.go:130] >            max_concurrent 1000
	I1218 23:54:15.370311  881462 command_runner.go:130] >         }
	I1218 23:54:15.370327  881462 command_runner.go:130] >         cache 30
	I1218 23:54:15.370355  881462 command_runner.go:130] >         loop
	I1218 23:54:15.370378  881462 command_runner.go:130] >         reload
	I1218 23:54:15.370396  881462 command_runner.go:130] >         loadbalance
	I1218 23:54:15.370413  881462 command_runner.go:130] >     }
	I1218 23:54:15.370441  881462 command_runner.go:130] > kind: ConfigMap
	I1218 23:54:15.370466  881462 command_runner.go:130] > metadata:
	I1218 23:54:15.370486  881462 command_runner.go:130] >   creationTimestamp: "2023-12-18T23:54:02Z"
	I1218 23:54:15.370516  881462 command_runner.go:130] >   name: coredns
	I1218 23:54:15.370538  881462 command_runner.go:130] >   namespace: kube-system
	I1218 23:54:15.370556  881462 command_runner.go:130] >   resourceVersion: "269"
	I1218 23:54:15.370576  881462 command_runner.go:130] >   uid: 16f8efeb-c652-4183-be4a-5200b56b778c
	I1218 23:54:15.371080  881462 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1218 23:54:15.398657  881462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 23:54:15.441247  881462 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 23:54:15.595474  881462 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1218 23:54:15.595494  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:15.595503  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:15.595511  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:15.718309  881462 round_trippers.go:574] Response Status: 200 OK in 122 milliseconds
	I1218 23:54:15.718330  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:15.718339  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:15.718345  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:15.718351  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:15.718357  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:15.718364  881462 round_trippers.go:580]     Content-Length: 291
	I1218 23:54:15.718370  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:15 GMT
	I1218 23:54:15.718376  881462 round_trippers.go:580]     Audit-Id: eba40cd0-afda-457f-b415-c51ee56f11e2
	I1218 23:54:15.767109  881462 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"506a1bf3-be48-435b-8d09-a6642bb1a363","resourceVersion":"355","creationTimestamp":"2023-12-18T23:54:02Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1218 23:54:15.767228  881462 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-320272" context rescaled to 1 replicas
	I1218 23:54:15.767254  881462 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1218 23:54:15.769687  881462 out.go:177] * Verifying Kubernetes components...
	I1218 23:54:15.771638  881462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:54:15.986100  881462 command_runner.go:130] > configmap/coredns replaced
	I1218 23:54:15.986131  881462 start.go:929] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1218 23:54:16.031426  881462 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1218 23:54:16.036175  881462 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1218 23:54:16.036240  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:16.036262  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:16.036282  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:16.044712  881462 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1218 23:54:16.044783  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:16.044806  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:16.044825  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:16.044858  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:16.044879  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:16.044898  881462 round_trippers.go:580]     Content-Length: 1273
	I1218 23:54:16.044916  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:16 GMT
	I1218 23:54:16.044949  881462 round_trippers.go:580]     Audit-Id: 57809464-c52c-416f-92ea-4c214649dd1f
	I1218 23:54:16.045862  881462 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"c4fc0e07-25f7-4598-b545-9918e7d5548d","resourceVersion":"394","creationTimestamp":"2023-12-18T23:54:16Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-18T23:54:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1218 23:54:16.046392  881462 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c4fc0e07-25f7-4598-b545-9918e7d5548d","resourceVersion":"394","creationTimestamp":"2023-12-18T23:54:16Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-18T23:54:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1218 23:54:16.046479  881462 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1218 23:54:16.046512  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:16.046539  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:16.046561  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:16.046596  881462 round_trippers.go:473]     Content-Type: application/json
	I1218 23:54:16.053401  881462 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1218 23:54:16.053470  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:16.053493  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:16.053510  881462 round_trippers.go:580]     Content-Length: 1220
	I1218 23:54:16.053547  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:16 GMT
	I1218 23:54:16.053571  881462 round_trippers.go:580]     Audit-Id: d1dd909a-3035-43a0-b4ec-f2476b305d12
	I1218 23:54:16.053588  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:16.053606  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:16.053637  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:16.054414  881462 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c4fc0e07-25f7-4598-b545-9918e7d5548d","resourceVersion":"394","creationTimestamp":"2023-12-18T23:54:16Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-12-18T23:54:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1218 23:54:16.220463  881462 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1218 23:54:16.228050  881462 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1218 23:54:16.236668  881462 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1218 23:54:16.247669  881462 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1218 23:54:16.256896  881462 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1218 23:54:16.272500  881462 command_runner.go:130] > pod/storage-provisioner created
	I1218 23:54:16.276085  881462 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1218 23:54:16.274579  881462 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:54:16.278703  881462 kapi.go:59] client config for multinode-320272: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.crt", KeyFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.key", CAFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf310), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 23:54:16.279026  881462 node_ready.go:35] waiting up to 6m0s for node "multinode-320272" to be "Ready" ...
	I1218 23:54:16.279136  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:16.279147  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:16.279156  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:16.279163  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:16.279211  881462 addons.go:502] enable addons completed in 1.18678089s: enabled=[default-storageclass storage-provisioner]
	I1218 23:54:16.282845  881462 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 23:54:16.282868  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:16.282889  881462 round_trippers.go:580]     Audit-Id: 33ee4f63-1eec-495a-914b-5eacc5d62ebd
	I1218 23:54:16.282895  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:16.282902  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:16.282908  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:16.282917  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:16.282925  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:16 GMT
	I1218 23:54:16.283048  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:16.779383  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:16.779407  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:16.779416  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:16.779423  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:16.781981  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:16.782044  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:16.782058  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:16.782065  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:16 GMT
	I1218 23:54:16.782077  881462 round_trippers.go:580]     Audit-Id: a2ed6fe9-3d03-41d4-9afa-d62628ef028d
	I1218 23:54:16.782090  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:16.782100  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:16.782107  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:16.782302  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:17.279451  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:17.279475  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:17.279485  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:17.279492  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:17.282087  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:17.282110  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:17.282119  881462 round_trippers.go:580]     Audit-Id: 584862fe-9cd3-4cda-9fae-a9f49a9bf1f9
	I1218 23:54:17.282125  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:17.282131  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:17.282137  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:17.282146  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:17.282153  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:17 GMT
	I1218 23:54:17.282423  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:17.779291  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:17.779348  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:17.779382  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:17.779406  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:17.782209  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:17.782276  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:17.782298  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:17.782316  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:17.782350  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:17.782374  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:17 GMT
	I1218 23:54:17.782393  881462 round_trippers.go:580]     Audit-Id: 6826e912-675d-4942-b480-6dd4538d503d
	I1218 23:54:17.782412  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:17.782616  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:18.279902  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:18.279934  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:18.279961  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:18.279976  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:18.282584  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:18.282623  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:18.282643  881462 round_trippers.go:580]     Audit-Id: 83c7ac3f-f9b1-4bee-bb28-1079e429aaf0
	I1218 23:54:18.282651  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:18.282663  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:18.282677  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:18.282684  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:18.282695  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:18 GMT
	I1218 23:54:18.282881  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:18.283326  881462 node_ready.go:58] node "multinode-320272" has status "Ready":"False"
	I1218 23:54:18.779477  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:18.779498  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:18.779508  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:18.779515  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:18.782027  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:18.782051  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:18.782060  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:18.782124  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:18.782136  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:18 GMT
	I1218 23:54:18.782154  881462 round_trippers.go:580]     Audit-Id: 9b1fd09d-9add-4a02-84bd-8cddecce5338
	I1218 23:54:18.782160  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:18.782171  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:18.782289  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:19.279844  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:19.279867  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:19.279877  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:19.279884  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:19.282454  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:19.282513  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:19.282534  881462 round_trippers.go:580]     Audit-Id: 808c0d24-24fb-4666-a268-c60ffb173a95
	I1218 23:54:19.282554  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:19.282585  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:19.282606  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:19.282623  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:19.282643  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:19 GMT
	I1218 23:54:19.282768  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:19.779318  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:19.779343  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:19.779353  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:19.779360  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:19.781928  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:19.781951  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:19.781960  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:19.781967  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:19.781973  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:19.781980  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:19 GMT
	I1218 23:54:19.781990  881462 round_trippers.go:580]     Audit-Id: 2640528c-042f-423a-989c-414998d78ab6
	I1218 23:54:19.781996  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:19.782396  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:20.280099  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:20.280125  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:20.280135  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:20.280142  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:20.282985  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:20.283010  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:20.283019  881462 round_trippers.go:580]     Audit-Id: b0cc2f24-49cc-4f8a-a6a4-d6187c1356ca
	I1218 23:54:20.283025  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:20.283032  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:20.283038  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:20.283044  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:20.283050  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:20 GMT
	I1218 23:54:20.283264  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:20.283702  881462 node_ready.go:58] node "multinode-320272" has status "Ready":"False"
	I1218 23:54:20.779275  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:20.779299  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:20.779309  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:20.779317  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:20.781870  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:20.781935  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:20.781958  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:20.781976  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:20 GMT
	I1218 23:54:20.782009  881462 round_trippers.go:580]     Audit-Id: cc6b7141-a4c7-4063-8575-e3debb532382
	I1218 23:54:20.782033  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:20.782045  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:20.782051  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:20.782233  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:21.279349  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:21.279382  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:21.279392  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:21.279399  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:21.282136  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:21.282159  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:21.282168  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:21.282176  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:21.282183  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:21.282189  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:21 GMT
	I1218 23:54:21.282200  881462 round_trippers.go:580]     Audit-Id: d51ef2da-60f8-4ddd-8a60-df11f832e5c4
	I1218 23:54:21.282206  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:21.282420  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:21.779515  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:21.779542  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:21.779552  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:21.779559  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:21.782171  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:21.782228  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:21.782249  881462 round_trippers.go:580]     Audit-Id: 05fe5ca2-587b-4694-958c-97d39b45d268
	I1218 23:54:21.782267  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:21.782285  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:21.782315  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:21.782338  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:21.782359  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:21 GMT
	I1218 23:54:21.782491  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:22.280130  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:22.280152  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:22.280162  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:22.280169  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:22.282914  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:22.282946  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:22.282957  881462 round_trippers.go:580]     Audit-Id: 70844a91-92ae-436f-9e2a-4c05abe23f8e
	I1218 23:54:22.282968  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:22.282974  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:22.282981  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:22.282991  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:22.282998  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:22 GMT
	I1218 23:54:22.283186  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:22.779660  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:22.779686  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:22.779696  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:22.779703  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:22.783055  881462 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 23:54:22.783077  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:22.783086  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:22.783092  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:22 GMT
	I1218 23:54:22.783098  881462 round_trippers.go:580]     Audit-Id: 5b1ac296-12e3-42be-9230-111a6069585b
	I1218 23:54:22.783105  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:22.783115  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:22.783122  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:22.783615  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:22.784031  881462 node_ready.go:58] node "multinode-320272" has status "Ready":"False"
	I1218 23:54:23.280106  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:23.280127  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:23.280137  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:23.280144  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:23.282817  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:23.282837  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:23.282845  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:23.282852  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:23.282858  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:23.282865  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:23.282871  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:23 GMT
	I1218 23:54:23.282877  881462 round_trippers.go:580]     Audit-Id: 95c42370-54c2-425f-861e-c16a76d6e9bb
	I1218 23:54:23.283528  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:23.780236  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:23.780259  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:23.780269  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:23.780276  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:23.782667  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:23.782691  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:23.782699  881462 round_trippers.go:580]     Audit-Id: d1557ea1-5d50-47d1-b90e-1051f5639be5
	I1218 23:54:23.782706  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:23.782712  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:23.782719  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:23.782731  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:23.782739  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:23 GMT
	I1218 23:54:23.783039  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:24.279719  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:24.279741  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:24.279751  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:24.279758  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:24.282372  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:24.282442  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:24.282491  881462 round_trippers.go:580]     Audit-Id: 0fba9ccd-9122-477c-9975-e9c0069d2e91
	I1218 23:54:24.282522  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:24.282534  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:24.282541  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:24.282548  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:24.282554  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:24 GMT
	I1218 23:54:24.282655  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:24.780213  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:24.780237  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:24.780247  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:24.780255  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:24.782940  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:24.782973  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:24.782982  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:24 GMT
	I1218 23:54:24.782988  881462 round_trippers.go:580]     Audit-Id: 6fe716c9-24a4-46bc-9dfb-4d8062519056
	I1218 23:54:24.782995  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:24.783001  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:24.783017  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:24.783027  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:24.783257  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:25.280190  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:25.280215  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:25.280224  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:25.280232  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:25.282709  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:25.282728  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:25.282736  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:25.282744  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:25 GMT
	I1218 23:54:25.282750  881462 round_trippers.go:580]     Audit-Id: fe23cc21-b6c1-4a63-9394-f9631f7d1465
	I1218 23:54:25.282757  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:25.282772  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:25.282778  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:25.283019  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:25.283458  881462 node_ready.go:58] node "multinode-320272" has status "Ready":"False"
	I1218 23:54:25.779335  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:25.779358  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:25.779368  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:25.779375  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:25.782021  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:25.782080  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:25.782101  881462 round_trippers.go:580]     Audit-Id: 54597a18-1469-4c71-afe7-abeb91391d39
	I1218 23:54:25.782122  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:25.782153  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:25.782175  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:25.782193  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:25.782213  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:25 GMT
	I1218 23:54:25.782361  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:26.279925  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:26.279993  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:26.280004  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:26.280011  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:26.282553  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:26.282638  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:26.282648  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:26 GMT
	I1218 23:54:26.282654  881462 round_trippers.go:580]     Audit-Id: b4898a47-cfb0-4786-8da3-a6cc9b673d9a
	I1218 23:54:26.282661  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:26.282667  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:26.282708  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:26.282718  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:26.282815  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:26.779287  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:26.779321  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:26.779331  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:26.779339  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:26.781948  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:26.782032  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:26.782041  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:26.782048  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:26 GMT
	I1218 23:54:26.782055  881462 round_trippers.go:580]     Audit-Id: 6bd67571-8521-44fe-815d-a8012e5f1a6d
	I1218 23:54:26.782061  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:26.782067  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:26.782073  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:26.782197  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:27.279494  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:27.279520  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:27.279529  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:27.279536  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:27.282177  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:27.282201  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:27.282211  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:27.282219  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:27.282225  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:27.282232  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:27.282238  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:27 GMT
	I1218 23:54:27.282244  881462 round_trippers.go:580]     Audit-Id: 02ed886b-1372-4e2e-be07-83b21d6be5ae
	I1218 23:54:27.282488  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:27.779658  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:27.779688  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:27.779698  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:27.779705  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:27.782527  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:27.782552  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:27.782561  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:27.782568  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:27 GMT
	I1218 23:54:27.782574  881462 round_trippers.go:580]     Audit-Id: 5e99d8a6-cab1-4c6d-b3c8-60ef0c3a9e13
	I1218 23:54:27.782580  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:27.782587  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:27.782593  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:27.782749  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:27.783150  881462 node_ready.go:58] node "multinode-320272" has status "Ready":"False"
	I1218 23:54:28.279699  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:28.279720  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:28.279730  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:28.279737  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:28.282254  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:28.282272  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:28.282281  881462 round_trippers.go:580]     Audit-Id: de4f8bac-6ca2-4614-ad1b-b02dbc9dd9fe
	I1218 23:54:28.282287  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:28.282294  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:28.282300  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:28.282307  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:28.282313  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:28 GMT
	I1218 23:54:28.282454  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:28.779960  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:28.779983  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:28.779992  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:28.779999  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:28.782533  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:28.782557  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:28.782566  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:28.782572  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:28.782580  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:28 GMT
	I1218 23:54:28.782589  881462 round_trippers.go:580]     Audit-Id: b7ceb398-0e3b-4020-806d-a0a14b14b3f3
	I1218 23:54:28.782595  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:28.782606  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:28.782796  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:29.279993  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:29.280016  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:29.280026  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:29.280033  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:29.282454  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:29.282480  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:29.282488  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:29.282495  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:29 GMT
	I1218 23:54:29.282502  881462 round_trippers.go:580]     Audit-Id: 447ffca6-968b-4fbf-b4a6-ab38faba9c10
	I1218 23:54:29.282509  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:29.282515  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:29.282523  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:29.282626  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:29.779293  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:29.779316  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:29.779325  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:29.779332  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:29.781909  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:29.781931  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:29.781940  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:29.781947  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:29.781954  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:29 GMT
	I1218 23:54:29.781960  881462 round_trippers.go:580]     Audit-Id: beba9bac-b745-4753-a278-b35ecc1555da
	I1218 23:54:29.781967  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:29.781979  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:29.782101  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:30.280051  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:30.280076  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:30.280086  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:30.280093  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:30.282753  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:30.282778  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:30.282787  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:30 GMT
	I1218 23:54:30.282794  881462 round_trippers.go:580]     Audit-Id: c0ad4138-cb0a-416b-acc0-35f91205e51a
	I1218 23:54:30.282800  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:30.282806  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:30.282813  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:30.282819  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:30.283061  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:30.283469  881462 node_ready.go:58] node "multinode-320272" has status "Ready":"False"
	I1218 23:54:30.780235  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:30.780262  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:30.780272  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:30.780279  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:30.782850  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:30.782874  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:30.782882  881462 round_trippers.go:580]     Audit-Id: f42c5aa6-79af-479e-a621-93cd975d715a
	I1218 23:54:30.782889  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:30.782895  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:30.782901  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:30.782912  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:30.782919  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:30 GMT
	I1218 23:54:30.783166  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:31.279280  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:31.279304  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:31.279314  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:31.279325  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:31.281879  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:31.281899  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:31.281908  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:31 GMT
	I1218 23:54:31.281915  881462 round_trippers.go:580]     Audit-Id: e10f5ceb-585e-47e9-8118-89077c6dec75
	I1218 23:54:31.281921  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:31.281927  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:31.281933  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:31.281939  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:31.282173  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:31.780118  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:31.780144  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:31.780154  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:31.780161  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:31.782713  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:31.782741  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:31.782750  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:31.782756  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:31.782762  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:31.782769  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:31 GMT
	I1218 23:54:31.782776  881462 round_trippers.go:580]     Audit-Id: e9f78118-02ed-45d9-bb06-28a17482f46b
	I1218 23:54:31.782782  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:31.783103  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:32.280272  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:32.280299  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:32.280309  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:32.280317  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:32.283137  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:32.283172  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:32.283181  881462 round_trippers.go:580]     Audit-Id: feda38c3-a275-4a38-bd6e-676755485285
	I1218 23:54:32.283187  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:32.283194  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:32.283204  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:32.283214  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:32.283226  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:32 GMT
	I1218 23:54:32.283363  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:32.283819  881462 node_ready.go:58] node "multinode-320272" has status "Ready":"False"
	I1218 23:54:32.779296  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:32.779323  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:32.779338  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:32.779345  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:32.781884  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:32.781909  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:32.781917  881462 round_trippers.go:580]     Audit-Id: 999a2861-91af-4340-98f7-e9699da8f10f
	I1218 23:54:32.781924  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:32.781930  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:32.781936  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:32.781942  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:32.781953  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:32 GMT
	I1218 23:54:32.782344  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:33.279800  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:33.279824  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:33.279834  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:33.279841  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:33.282381  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:33.282402  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:33.282410  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:33 GMT
	I1218 23:54:33.282416  881462 round_trippers.go:580]     Audit-Id: 74335528-1c12-4f0e-8dad-242471c34143
	I1218 23:54:33.282422  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:33.282429  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:33.282435  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:33.282441  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:33.282601  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:33.779467  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:33.779493  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:33.779506  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:33.779514  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:33.782136  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:33.782161  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:33.782170  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:33 GMT
	I1218 23:54:33.782177  881462 round_trippers.go:580]     Audit-Id: 716ce1de-111b-4d28-aa7b-434e0e60230d
	I1218 23:54:33.782183  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:33.782189  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:33.782195  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:33.782202  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:33.782367  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:34.279390  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:34.279437  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:34.279447  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:34.279454  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:34.282164  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:34.282191  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:34.282200  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:34 GMT
	I1218 23:54:34.282206  881462 round_trippers.go:580]     Audit-Id: a716d7ba-c3c1-4a64-99a2-26f7e3b86cd0
	I1218 23:54:34.282212  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:34.282219  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:34.282227  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:34.282234  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:34.282461  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:34.780076  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:34.780099  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:34.780109  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:34.780116  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:34.782636  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:34.782659  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:34.782667  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:34.782674  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:34.782681  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:34.782687  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:34 GMT
	I1218 23:54:34.782693  881462 round_trippers.go:580]     Audit-Id: 6dac146f-26ce-47b9-a3c0-03f08855901e
	I1218 23:54:34.782702  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:34.782944  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:34.783347  881462 node_ready.go:58] node "multinode-320272" has status "Ready":"False"
	I1218 23:54:35.280226  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:35.280249  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:35.280259  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:35.280266  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:35.282677  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:35.282696  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:35.282705  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:35.282711  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:35.282718  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:35 GMT
	I1218 23:54:35.282724  881462 round_trippers.go:580]     Audit-Id: 02afb5f3-8b5b-454a-8d35-6517244f7cba
	I1218 23:54:35.282730  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:35.282736  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:35.282970  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:35.779305  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:35.779332  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:35.779342  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:35.779349  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:35.782136  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:35.782158  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:35.782170  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:35.782177  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:35 GMT
	I1218 23:54:35.782183  881462 round_trippers.go:580]     Audit-Id: d2496894-fd8d-4d5e-9ee5-d0b1e218c974
	I1218 23:54:35.782190  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:35.782196  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:35.782202  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:35.782594  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:36.279193  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:36.279221  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:36.279231  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:36.279239  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:36.281909  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:36.281931  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:36.281940  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:36.281947  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:36.281953  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:36.281959  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:36.281965  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:36 GMT
	I1218 23:54:36.281971  881462 round_trippers.go:580]     Audit-Id: 164d7d18-6285-4ee9-913b-e8bafd5a9b66
	I1218 23:54:36.282086  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:36.779255  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:36.779280  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:36.779291  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:36.779300  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:36.781973  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:36.782000  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:36.782010  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:36.782017  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:36.782024  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:36.782030  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:36.782036  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:36 GMT
	I1218 23:54:36.782043  881462 round_trippers.go:580]     Audit-Id: 83d77af8-9041-40ff-89ee-00b9177b85fd
	I1218 23:54:36.782157  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:37.279213  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:37.279237  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:37.279253  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:37.279260  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:37.281765  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:37.281785  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:37.281793  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:37 GMT
	I1218 23:54:37.281800  881462 round_trippers.go:580]     Audit-Id: cd15cb08-7afd-409e-81ee-a4d95b580a12
	I1218 23:54:37.281806  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:37.281812  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:37.281818  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:37.281824  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:37.282049  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:37.282454  881462 node_ready.go:58] node "multinode-320272" has status "Ready":"False"
	I1218 23:54:37.780215  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:37.780237  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:37.780248  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:37.780255  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:37.782829  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:37.782847  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:37.782856  881462 round_trippers.go:580]     Audit-Id: ce73aa25-0e66-44a0-8456-1d28d7a83808
	I1218 23:54:37.782862  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:37.782869  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:37.782875  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:37.782881  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:37.782888  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:37 GMT
	I1218 23:54:37.783028  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:38.280083  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:38.280106  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:38.280115  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:38.280122  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:38.282646  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:38.282672  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:38.282681  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:38.282689  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:38 GMT
	I1218 23:54:38.282695  881462 round_trippers.go:580]     Audit-Id: 1c31cd66-15fc-4ca3-b02f-eb8b77e41e99
	I1218 23:54:38.282701  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:38.282707  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:38.282714  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:38.282839  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:38.779997  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:38.780020  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:38.780030  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:38.780037  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:38.782521  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:38.782540  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:38.782548  881462 round_trippers.go:580]     Audit-Id: 92de16b4-758c-45f2-877b-b6e5ea1d7d66
	I1218 23:54:38.782555  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:38.782561  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:38.782568  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:38.782574  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:38.782580  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:38 GMT
	I1218 23:54:38.782772  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:39.279306  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:39.279328  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:39.279338  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:39.279345  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:39.281998  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:39.282023  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:39.282032  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:39.282040  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:39 GMT
	I1218 23:54:39.282052  881462 round_trippers.go:580]     Audit-Id: f438c15f-c987-4597-b42c-791a7551d540
	I1218 23:54:39.282058  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:39.282064  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:39.282070  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:39.282173  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:39.282643  881462 node_ready.go:58] node "multinode-320272" has status "Ready":"False"
	I1218 23:54:39.779649  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:39.779668  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:39.779678  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:39.779686  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:39.782285  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:39.782313  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:39.782323  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:39.782330  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:39.782337  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:39 GMT
	I1218 23:54:39.782343  881462 round_trippers.go:580]     Audit-Id: d954e342-3854-4c44-ad95-13ccf94eb00b
	I1218 23:54:39.782349  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:39.782360  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:39.782485  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:40.280011  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:40.280047  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:40.280057  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:40.280064  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:40.282642  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:40.282669  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:40.282679  881462 round_trippers.go:580]     Audit-Id: ae151cf7-b6ae-4428-9aa8-0b9373ba6c82
	I1218 23:54:40.282685  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:40.282693  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:40.282700  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:40.282706  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:40.282712  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:40 GMT
	I1218 23:54:40.282975  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:40.780105  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:40.780127  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:40.780137  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:40.780144  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:40.782597  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:40.782619  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:40.782628  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:40.782634  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:40.782641  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:40 GMT
	I1218 23:54:40.782647  881462 round_trippers.go:580]     Audit-Id: 4ff47166-4915-446f-b51b-772ce53ff17d
	I1218 23:54:40.782653  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:40.782659  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:40.782791  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:41.279993  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:41.280016  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:41.280025  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:41.280032  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:41.282861  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:41.282885  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:41.282894  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:41.282901  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:41.282908  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:41 GMT
	I1218 23:54:41.282915  881462 round_trippers.go:580]     Audit-Id: b92fef80-437b-4e58-9be1-5b5f03c3463a
	I1218 23:54:41.282924  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:41.282930  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:41.283040  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:41.283445  881462 node_ready.go:58] node "multinode-320272" has status "Ready":"False"
	I1218 23:54:41.779319  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:41.779341  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:41.779351  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:41.779358  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:41.782169  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:41.782191  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:41.782200  881462 round_trippers.go:580]     Audit-Id: 30355d0f-f02b-4baf-9f91-958cdc4b9fc2
	I1218 23:54:41.782207  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:41.782215  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:41.782221  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:41.782227  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:41.782234  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:41 GMT
	I1218 23:54:41.782383  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:42.279341  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:42.279365  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:42.279376  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:42.279383  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:42.282624  881462 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 23:54:42.282655  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:42.282666  881462 round_trippers.go:580]     Audit-Id: 596057af-56c0-486c-8088-4db2c7307207
	I1218 23:54:42.282674  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:42.282681  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:42.282687  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:42.282694  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:42.282706  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:42 GMT
	I1218 23:54:42.282817  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:42.780048  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:42.780073  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:42.780083  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:42.780090  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:42.782797  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:42.782821  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:42.782829  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:42.782836  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:42.782842  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:42.782848  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:42.782855  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:42 GMT
	I1218 23:54:42.782861  881462 round_trippers.go:580]     Audit-Id: 3bc2d4c9-0d10-4153-b900-391316c376d1
	I1218 23:54:42.783003  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:43.279251  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:43.279277  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:43.279287  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:43.279294  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:43.281937  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:43.281958  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:43.281967  881462 round_trippers.go:580]     Audit-Id: a6e51d03-7993-4bc9-ab35-8ccd1884d9b0
	I1218 23:54:43.281973  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:43.281979  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:43.281985  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:43.281991  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:43.281998  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:43 GMT
	I1218 23:54:43.282094  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:43.779782  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:43.779806  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:43.779815  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:43.779824  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:43.782208  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:43.782227  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:43.782236  881462 round_trippers.go:580]     Audit-Id: 5924e5be-bf52-4b81-890d-a43a1a63dd3d
	I1218 23:54:43.782242  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:43.782253  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:43.782259  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:43.782265  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:43.782271  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:43 GMT
	I1218 23:54:43.782406  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:43.782839  881462 node_ready.go:58] node "multinode-320272" has status "Ready":"False"
	I1218 23:54:44.279497  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:44.279521  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:44.279532  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:44.279539  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:44.282104  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:44.282125  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:44.282133  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:44.282150  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:44.282156  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:44 GMT
	I1218 23:54:44.282163  881462 round_trippers.go:580]     Audit-Id: 18508046-f280-4f1b-8cd6-a9b1a9fc0e0f
	I1218 23:54:44.282169  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:44.282175  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:44.282308  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:44.780222  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:44.780247  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:44.780257  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:44.780265  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:44.782825  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:44.782849  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:44.782857  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:44 GMT
	I1218 23:54:44.782864  881462 round_trippers.go:580]     Audit-Id: 5179bb9f-601a-4377-9c08-affc8b8ad238
	I1218 23:54:44.782871  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:44.782877  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:44.782886  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:44.782896  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:44.783061  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:45.279356  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:45.279385  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:45.279397  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:45.279404  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:45.282794  881462 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 23:54:45.282854  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:45.282888  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:45.282922  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:45 GMT
	I1218 23:54:45.282951  881462 round_trippers.go:580]     Audit-Id: dd559a8e-fd0b-4e39-aace-69804904684a
	I1218 23:54:45.282959  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:45.282966  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:45.283066  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:45.283738  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:45.780136  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:45.780159  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:45.780173  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:45.780186  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:45.782647  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:45.782672  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:45.782681  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:45.782687  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:45.782694  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:45 GMT
	I1218 23:54:45.782700  881462 round_trippers.go:580]     Audit-Id: 19945063-da17-46ef-a39c-a452d5dd9515
	I1218 23:54:45.782710  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:45.782716  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:45.783009  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:45.783428  881462 node_ready.go:58] node "multinode-320272" has status "Ready":"False"
	I1218 23:54:46.279618  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:46.279642  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:46.279652  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:46.279661  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:46.282167  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:46.282191  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:46.282199  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:46.282206  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:46.282212  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:46.282219  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:46.282226  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:46 GMT
	I1218 23:54:46.282237  881462 round_trippers.go:580]     Audit-Id: 46e5caf8-c028-4422-9f4c-19113a0f4223
	I1218 23:54:46.282439  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:46.779864  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:46.779889  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:46.779898  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:46.779906  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:46.782491  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:46.782512  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:46.782520  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:46.782526  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:46.782532  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:46 GMT
	I1218 23:54:46.782539  881462 round_trippers.go:580]     Audit-Id: 1407b0ff-273e-47f9-827e-4a9974a57952
	I1218 23:54:46.782545  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:46.782551  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:46.782674  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:47.279852  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:47.279874  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:47.279884  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:47.279891  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:47.282609  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:47.282636  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:47.282663  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:47.282680  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:47.282687  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:47.282693  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:47.282705  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:47 GMT
	I1218 23:54:47.282711  881462 round_trippers.go:580]     Audit-Id: 144f45fe-e401-4d5f-a23e-70c4459654f3
	I1218 23:54:47.282973  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:47.779570  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:47.779595  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:47.779605  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:47.779612  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:47.782234  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:47.782256  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:47.782264  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:47.782271  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:47.782277  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:47.782283  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:47 GMT
	I1218 23:54:47.782291  881462 round_trippers.go:580]     Audit-Id: df27405b-32c0-4572-b464-40ed978ac499
	I1218 23:54:47.782297  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:47.782778  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"342","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1218 23:54:48.279291  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:48.279312  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:48.279322  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:48.279329  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:48.282777  881462 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 23:54:48.282798  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:48.282806  881462 round_trippers.go:580]     Audit-Id: 0a8a59dd-e3e1-4977-9481-913db9bfb4b9
	I1218 23:54:48.282813  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:48.282819  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:48.282825  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:48.282832  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:48.282838  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:48 GMT
	I1218 23:54:48.285344  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1218 23:54:48.285743  881462 node_ready.go:49] node "multinode-320272" has status "Ready":"True"
	I1218 23:54:48.285756  881462 node_ready.go:38] duration metric: took 32.006692813s waiting for node "multinode-320272" to be "Ready" ...
	I1218 23:54:48.285766  881462 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 23:54:48.285865  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1218 23:54:48.285872  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:48.285881  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:48.285888  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:48.290987  881462 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 23:54:48.291008  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:48.291016  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:48.291023  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:48 GMT
	I1218 23:54:48.291029  881462 round_trippers.go:580]     Audit-Id: 638f3ebb-b48d-47ad-aff5-9179f8c1daff
	I1218 23:54:48.291035  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:48.291041  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:48.291047  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:48.296535  881462 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fwqn2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9a076607-92d0-42d5-a2e5-95580b423c69","resourceVersion":"443","creationTimestamp":"2023-12-18T23:54:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"21efa5fd-ab76-46b8-be2e-c7171335b5f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"21efa5fd-ab76-46b8-be2e-c7171335b5f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I1218 23:54:48.300544  881462 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fwqn2" in "kube-system" namespace to be "Ready" ...
	I1218 23:54:48.300718  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fwqn2
	I1218 23:54:48.300745  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:48.300765  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:48.300785  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:48.303716  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:48.303733  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:48.303741  881462 round_trippers.go:580]     Audit-Id: ff97f3e7-04ca-49a6-add9-7958349ef4f9
	I1218 23:54:48.303747  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:48.303753  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:48.303759  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:48.303766  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:48.303772  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:48 GMT
	I1218 23:54:48.304295  881462 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fwqn2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9a076607-92d0-42d5-a2e5-95580b423c69","resourceVersion":"443","creationTimestamp":"2023-12-18T23:54:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"21efa5fd-ab76-46b8-be2e-c7171335b5f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"21efa5fd-ab76-46b8-be2e-c7171335b5f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1218 23:54:48.304808  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:48.304818  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:48.304827  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:48.304833  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:48.307764  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:48.307783  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:48.307791  881462 round_trippers.go:580]     Audit-Id: 5b0650f4-a282-4546-9811-924f6ba839f3
	I1218 23:54:48.307797  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:48.307803  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:48.307809  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:48.307815  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:48.307822  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:48 GMT
	I1218 23:54:48.308380  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1218 23:54:48.801141  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fwqn2
	I1218 23:54:48.801167  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:48.801177  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:48.801185  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:48.803764  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:48.803827  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:48.803840  881462 round_trippers.go:580]     Audit-Id: 84205480-afdd-4074-8efe-32b8f87c18ba
	I1218 23:54:48.803852  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:48.803859  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:48.803876  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:48.803884  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:48.803894  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:48 GMT
	I1218 23:54:48.804030  881462 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fwqn2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9a076607-92d0-42d5-a2e5-95580b423c69","resourceVersion":"443","creationTimestamp":"2023-12-18T23:54:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"21efa5fd-ab76-46b8-be2e-c7171335b5f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"21efa5fd-ab76-46b8-be2e-c7171335b5f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1218 23:54:48.804565  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:48.804583  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:48.804592  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:48.804599  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:48.806963  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:48.806981  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:48.806989  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:48.806996  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:48.807002  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:48.807008  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:48.807014  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:48 GMT
	I1218 23:54:48.807021  881462 round_trippers.go:580]     Audit-Id: c6f69751-da50-43ea-9dde-aa988e1d08a3
	I1218 23:54:48.807147  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1218 23:54:49.300845  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fwqn2
	I1218 23:54:49.300869  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:49.300879  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:49.300889  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:49.303403  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:49.303471  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:49.303486  881462 round_trippers.go:580]     Audit-Id: bbedd390-2944-401a-8409-1c1ea7f6ed7c
	I1218 23:54:49.303494  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:49.303503  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:49.303510  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:49.303520  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:49.303527  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:49 GMT
	I1218 23:54:49.303659  881462 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fwqn2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9a076607-92d0-42d5-a2e5-95580b423c69","resourceVersion":"456","creationTimestamp":"2023-12-18T23:54:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"21efa5fd-ab76-46b8-be2e-c7171335b5f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"21efa5fd-ab76-46b8-be2e-c7171335b5f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1218 23:54:49.304273  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:49.304289  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:49.304297  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:49.304303  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:49.306619  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:49.306640  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:49.306649  881462 round_trippers.go:580]     Audit-Id: f51a1549-972a-4453-8840-f7109456cb5d
	I1218 23:54:49.306656  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:49.306662  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:49.306672  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:49.306683  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:49.306689  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:49 GMT
	I1218 23:54:49.306812  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1218 23:54:49.307185  881462 pod_ready.go:92] pod "coredns-5dd5756b68-fwqn2" in "kube-system" namespace has status "Ready":"True"
	I1218 23:54:49.307204  881462 pod_ready.go:81] duration metric: took 1.006586078s waiting for pod "coredns-5dd5756b68-fwqn2" in "kube-system" namespace to be "Ready" ...
	I1218 23:54:49.307228  881462 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:54:49.307287  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-320272
	I1218 23:54:49.307297  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:49.307304  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:49.307318  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:49.309520  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:49.309544  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:49.309552  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:49.309558  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:49.309565  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:49.309576  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:49 GMT
	I1218 23:54:49.309587  881462 round_trippers.go:580]     Audit-Id: a7f0c700-5004-4474-afce-15edd3cc2349
	I1218 23:54:49.309593  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:49.309734  881462 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-320272","namespace":"kube-system","uid":"e8faa391-587b-49be-b29e-11f12f8c02bc","resourceVersion":"425","creationTimestamp":"2023-12-18T23:54:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"f1b69f0c003b97673c6630ae2cd61703","kubernetes.io/config.mirror":"f1b69f0c003b97673c6630ae2cd61703","kubernetes.io/config.seen":"2023-12-18T23:53:55.188275540Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1218 23:54:49.310176  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:49.310193  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:49.310200  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:49.310209  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:49.312279  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:49.312299  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:49.312308  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:49 GMT
	I1218 23:54:49.312314  881462 round_trippers.go:580]     Audit-Id: bbadf4e5-635f-4c3e-b338-7841ee5f0ba3
	I1218 23:54:49.312321  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:49.312327  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:49.312333  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:49.312342  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:49.312641  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1218 23:54:49.313025  881462 pod_ready.go:92] pod "etcd-multinode-320272" in "kube-system" namespace has status "Ready":"True"
	I1218 23:54:49.313042  881462 pod_ready.go:81] duration metric: took 5.804703ms waiting for pod "etcd-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:54:49.313056  881462 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:54:49.313122  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-320272
	I1218 23:54:49.313132  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:49.313140  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:49.313147  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:49.315448  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:49.315509  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:49.315531  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:49.315567  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:49.315591  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:49.315610  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:49.315642  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:49 GMT
	I1218 23:54:49.315662  881462 round_trippers.go:580]     Audit-Id: c9d67e69-a2d0-4ffe-b50a-1c34a4b18609
	I1218 23:54:49.315800  881462 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-320272","namespace":"kube-system","uid":"b160ebe3-121c-420a-a9c1-0315f470178c","resourceVersion":"426","creationTimestamp":"2023-12-18T23:54:03Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"509950bfc349e4a24d79b32d45049002","kubernetes.io/config.mirror":"509950bfc349e4a24d79b32d45049002","kubernetes.io/config.seen":"2023-12-18T23:54:02.715849151Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1218 23:54:49.316384  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:49.316401  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:49.316409  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:49.316418  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:49.318595  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:49.318614  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:49.318623  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:49.318630  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:49.318636  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:49.318642  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:49 GMT
	I1218 23:54:49.318648  881462 round_trippers.go:580]     Audit-Id: fb1a4f5c-99dd-4f2d-ad40-10430ac2f098
	I1218 23:54:49.318654  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:49.318752  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1218 23:54:49.319151  881462 pod_ready.go:92] pod "kube-apiserver-multinode-320272" in "kube-system" namespace has status "Ready":"True"
	I1218 23:54:49.319164  881462 pod_ready.go:81] duration metric: took 6.098519ms waiting for pod "kube-apiserver-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:54:49.319174  881462 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:54:49.319238  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-320272
	I1218 23:54:49.319242  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:49.319250  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:49.319256  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:49.321803  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:49.321823  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:49.321831  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:49 GMT
	I1218 23:54:49.321838  881462 round_trippers.go:580]     Audit-Id: 0505e44b-2eb5-4350-ae60-d5260f897ecf
	I1218 23:54:49.321844  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:49.321850  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:49.321856  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:49.321862  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:49.322091  881462 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-320272","namespace":"kube-system","uid":"9b5d1951-61b5-4237-9e19-dcf9fd90a729","resourceVersion":"427","creationTimestamp":"2023-12-18T23:54:03Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aa4353254f730a64c4fb2fddfe7d9122","kubernetes.io/config.mirror":"aa4353254f730a64c4fb2fddfe7d9122","kubernetes.io/config.seen":"2023-12-18T23:54:02.715854968Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1218 23:54:49.322600  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:49.322617  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:49.322625  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:49.322632  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:49.324917  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:49.324938  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:49.324947  881462 round_trippers.go:580]     Audit-Id: 80df98e4-14d5-4d77-a43f-3791532cf2a0
	I1218 23:54:49.324954  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:49.324960  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:49.324967  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:49.324977  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:49.324986  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:49 GMT
	I1218 23:54:49.325315  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1218 23:54:49.325725  881462 pod_ready.go:92] pod "kube-controller-manager-multinode-320272" in "kube-system" namespace has status "Ready":"True"
	I1218 23:54:49.325750  881462 pod_ready.go:81] duration metric: took 6.562171ms waiting for pod "kube-controller-manager-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:54:49.325765  881462 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-54h89" in "kube-system" namespace to be "Ready" ...
	I1218 23:54:49.325843  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-54h89
	I1218 23:54:49.325860  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:49.325868  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:49.325875  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:49.328269  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:49.328325  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:49.328347  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:49.328366  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:49 GMT
	I1218 23:54:49.328400  881462 round_trippers.go:580]     Audit-Id: ae9a507b-5701-47ad-98a9-9a873ba7a61a
	I1218 23:54:49.328423  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:49.328437  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:49.328443  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:49.328578  881462 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-54h89","generateName":"kube-proxy-","namespace":"kube-system","uid":"49bbd70d-f3b8-438d-8a7a-ad0a46e872b0","resourceVersion":"418","creationTimestamp":"2023-12-18T23:54:15Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"db4d7c1a-b61e-4603-bf4e-b58f24456242","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db4d7c1a-b61e-4603-bf4e-b58f24456242\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1218 23:54:49.480328  881462 request.go:629] Waited for 151.299557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:49.480412  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:49.480424  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:49.480433  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:49.480440  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:49.483705  881462 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 23:54:49.483734  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:49.483756  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:49.483763  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:49.483780  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:49.483793  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:49.483805  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:49 GMT
	I1218 23:54:49.483823  881462 round_trippers.go:580]     Audit-Id: f6a95e98-79a2-4e88-a724-72f27e094665
	I1218 23:54:49.484350  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1218 23:54:49.484778  881462 pod_ready.go:92] pod "kube-proxy-54h89" in "kube-system" namespace has status "Ready":"True"
	I1218 23:54:49.484796  881462 pod_ready.go:81] duration metric: took 159.014385ms waiting for pod "kube-proxy-54h89" in "kube-system" namespace to be "Ready" ...
	I1218 23:54:49.484807  881462 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:54:49.680227  881462 request.go:629] Waited for 195.354297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-320272
	I1218 23:54:49.680347  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-320272
	I1218 23:54:49.680363  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:49.680385  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:49.680409  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:49.682992  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:49.683066  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:49.683076  881462 round_trippers.go:580]     Audit-Id: 6c32b900-511b-4ecd-9eb7-9695a6e0372e
	I1218 23:54:49.683090  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:49.683097  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:49.683108  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:49.683114  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:49.683127  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:49 GMT
	I1218 23:54:49.683233  881462 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-320272","namespace":"kube-system","uid":"135e8712-3a4a-48ca-aede-e77448583234","resourceVersion":"424","creationTimestamp":"2023-12-18T23:54:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"194b5fdf4303a130d34b89fd0d0d02aa","kubernetes.io/config.mirror":"194b5fdf4303a130d34b89fd0d0d02aa","kubernetes.io/config.seen":"2023-12-18T23:54:02.715856092Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1218 23:54:49.880017  881462 request.go:629] Waited for 196.349974ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:49.880105  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:54:49.880116  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:49.880126  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:49.880133  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:49.882679  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:49.882704  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:49.882714  881462 round_trippers.go:580]     Audit-Id: 2c98cc6b-5aae-46cb-abba-a7bd01764f2c
	I1218 23:54:49.882721  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:49.882729  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:49.882736  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:49.882753  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:49.882763  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:49 GMT
	I1218 23:54:49.883094  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1218 23:54:49.883535  881462 pod_ready.go:92] pod "kube-scheduler-multinode-320272" in "kube-system" namespace has status "Ready":"True"
	I1218 23:54:49.883555  881462 pod_ready.go:81] duration metric: took 398.741328ms waiting for pod "kube-scheduler-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:54:49.883570  881462 pod_ready.go:38] duration metric: took 1.597781604s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
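A minimal client-go sketch of the readiness wait recorded by pod_ready.go above: poll a pod until its Ready condition is True, with the same 6m0s budget. The kubeconfig location is an assumption (the default ~/.kube/config); the namespace and pod name are copied from the log, and this is an illustrative standalone version rather than minikube's own helper.

// pod_ready_sketch.go - illustrative only, not minikube's pod_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the CI run keeps its own under the profile directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 6 minutes, mirroring the "waiting up to 6m0s" lines above.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-multinode-320272", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("ready wait finished, err =", err)
}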
	I1218 23:54:49.883587  881462 api_server.go:52] waiting for apiserver process to appear ...
	I1218 23:54:49.883651  881462 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 23:54:49.895068  881462 command_runner.go:130] > 1227
	I1218 23:54:49.896352  881462 api_server.go:72] duration metric: took 34.129072064s to wait for apiserver process to appear ...
	I1218 23:54:49.896372  881462 api_server.go:88] waiting for apiserver healthz status ...
	I1218 23:54:49.896391  881462 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1218 23:54:49.905294  881462 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1218 23:54:49.905382  881462 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1218 23:54:49.905394  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:49.905403  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:49.905411  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:49.906615  881462 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 23:54:49.906634  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:49.906642  881462 round_trippers.go:580]     Content-Length: 264
	I1218 23:54:49.906648  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:49 GMT
	I1218 23:54:49.906656  881462 round_trippers.go:580]     Audit-Id: 5006a666-4713-4406-a19e-4c833307e928
	I1218 23:54:49.906705  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:49.906717  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:49.906724  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:49.906731  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:49.906748  881462 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1218 23:54:49.906832  881462 api_server.go:141] control plane version: v1.28.4
	I1218 23:54:49.906850  881462 api_server.go:131] duration metric: took 10.473315ms to wait for apiserver health ...
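The healthz check logged by api_server.go above is an HTTPS GET against https://192.168.58.2:8443/healthz that expects a 200 response with the body "ok", followed by a /version request. A stripped-down probe might look like the sketch below; it skips TLS verification purely to keep the example short, whereas the real client authenticates with the cluster's certificates.

// healthz_probe_sketch.go - illustrative probe, not minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200 and "ok"
}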
	I1218 23:54:49.906858  881462 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 23:54:50.080245  881462 request.go:629] Waited for 173.320896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1218 23:54:50.080368  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1218 23:54:50.080396  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:50.080417  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:50.080431  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:50.084161  881462 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 23:54:50.084189  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:50.084198  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:50.084205  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:50.084219  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:50 GMT
	I1218 23:54:50.084225  881462 round_trippers.go:580]     Audit-Id: 323a4e5a-85b1-42a0-8430-cbbda84eaefe
	I1218 23:54:50.084242  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:50.084253  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:50.084797  881462 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"461"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fwqn2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9a076607-92d0-42d5-a2e5-95580b423c69","resourceVersion":"456","creationTimestamp":"2023-12-18T23:54:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"21efa5fd-ab76-46b8-be2e-c7171335b5f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"21efa5fd-ab76-46b8-be2e-c7171335b5f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1218 23:54:50.087268  881462 system_pods.go:59] 8 kube-system pods found
	I1218 23:54:50.087301  881462 system_pods.go:61] "coredns-5dd5756b68-fwqn2" [9a076607-92d0-42d5-a2e5-95580b423c69] Running
	I1218 23:54:50.087309  881462 system_pods.go:61] "etcd-multinode-320272" [e8faa391-587b-49be-b29e-11f12f8c02bc] Running
	I1218 23:54:50.087315  881462 system_pods.go:61] "kindnet-6vp9q" [d2510149-5fa2-49db-ad53-833f8c18ed44] Running
	I1218 23:54:50.087321  881462 system_pods.go:61] "kube-apiserver-multinode-320272" [b160ebe3-121c-420a-a9c1-0315f470178c] Running
	I1218 23:54:50.087327  881462 system_pods.go:61] "kube-controller-manager-multinode-320272" [9b5d1951-61b5-4237-9e19-dcf9fd90a729] Running
	I1218 23:54:50.087331  881462 system_pods.go:61] "kube-proxy-54h89" [49bbd70d-f3b8-438d-8a7a-ad0a46e872b0] Running
	I1218 23:54:50.087344  881462 system_pods.go:61] "kube-scheduler-multinode-320272" [135e8712-3a4a-48ca-aede-e77448583234] Running
	I1218 23:54:50.087352  881462 system_pods.go:61] "storage-provisioner" [aaed796f-c658-46b9-8222-ad7bdb3e9f7d] Running
	I1218 23:54:50.087359  881462 system_pods.go:74] duration metric: took 180.495304ms to wait for pod list to return data ...
	I1218 23:54:50.087374  881462 default_sa.go:34] waiting for default service account to be created ...
	I1218 23:54:50.279821  881462 request.go:629] Waited for 192.349098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1218 23:54:50.279896  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1218 23:54:50.279906  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:50.279915  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:50.279927  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:50.282463  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:50.282486  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:50.282496  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:50.282502  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:50.282525  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:50.282536  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:50.282543  881462 round_trippers.go:580]     Content-Length: 261
	I1218 23:54:50.282553  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:50 GMT
	I1218 23:54:50.282560  881462 round_trippers.go:580]     Audit-Id: 19724a7d-0bb4-4568-97ab-5991fbeb3040
	I1218 23:54:50.282580  881462 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"461"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a7ce1302-50c6-4414-861c-4c8a3f7058f5","resourceVersion":"332","creationTimestamp":"2023-12-18T23:54:14Z"}}]}
	I1218 23:54:50.282788  881462 default_sa.go:45] found service account: "default"
	I1218 23:54:50.282809  881462 default_sa.go:55] duration metric: took 195.428348ms for default service account to be created ...
	I1218 23:54:50.282819  881462 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 23:54:50.480238  881462 request.go:629] Waited for 197.333811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1218 23:54:50.480311  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1218 23:54:50.480323  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:50.480338  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:50.480348  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:50.484058  881462 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 23:54:50.484084  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:50.484093  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:50.484100  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:50.484106  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:50 GMT
	I1218 23:54:50.484112  881462 round_trippers.go:580]     Audit-Id: 5581f893-e212-45bf-a664-fc6902dabc56
	I1218 23:54:50.484118  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:50.484124  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:50.484643  881462 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"462"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fwqn2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9a076607-92d0-42d5-a2e5-95580b423c69","resourceVersion":"456","creationTimestamp":"2023-12-18T23:54:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"21efa5fd-ab76-46b8-be2e-c7171335b5f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"21efa5fd-ab76-46b8-be2e-c7171335b5f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1218 23:54:50.486998  881462 system_pods.go:86] 8 kube-system pods found
	I1218 23:54:50.487022  881462 system_pods.go:89] "coredns-5dd5756b68-fwqn2" [9a076607-92d0-42d5-a2e5-95580b423c69] Running
	I1218 23:54:50.487029  881462 system_pods.go:89] "etcd-multinode-320272" [e8faa391-587b-49be-b29e-11f12f8c02bc] Running
	I1218 23:54:50.487034  881462 system_pods.go:89] "kindnet-6vp9q" [d2510149-5fa2-49db-ad53-833f8c18ed44] Running
	I1218 23:54:50.487040  881462 system_pods.go:89] "kube-apiserver-multinode-320272" [b160ebe3-121c-420a-a9c1-0315f470178c] Running
	I1218 23:54:50.487046  881462 system_pods.go:89] "kube-controller-manager-multinode-320272" [9b5d1951-61b5-4237-9e19-dcf9fd90a729] Running
	I1218 23:54:50.487051  881462 system_pods.go:89] "kube-proxy-54h89" [49bbd70d-f3b8-438d-8a7a-ad0a46e872b0] Running
	I1218 23:54:50.487056  881462 system_pods.go:89] "kube-scheduler-multinode-320272" [135e8712-3a4a-48ca-aede-e77448583234] Running
	I1218 23:54:50.487061  881462 system_pods.go:89] "storage-provisioner" [aaed796f-c658-46b9-8222-ad7bdb3e9f7d] Running
	I1218 23:54:50.487069  881462 system_pods.go:126] duration metric: took 204.238216ms to wait for k8s-apps to be running ...
	I1218 23:54:50.487079  881462 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 23:54:50.487136  881462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:54:50.500755  881462 system_svc.go:56] duration metric: took 13.665236ms WaitForService to wait for kubelet.
	I1218 23:54:50.500781  881462 kubeadm.go:581] duration metric: took 34.733506201s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 23:54:50.500809  881462 node_conditions.go:102] verifying NodePressure condition ...
	I1218 23:54:50.680227  881462 request.go:629] Waited for 179.323538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1218 23:54:50.680293  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1218 23:54:50.680317  881462 round_trippers.go:469] Request Headers:
	I1218 23:54:50.680331  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:54:50.680344  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:54:50.683056  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:54:50.683084  881462 round_trippers.go:577] Response Headers:
	I1218 23:54:50.683101  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:54:50.683128  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:54:50 GMT
	I1218 23:54:50.683142  881462 round_trippers.go:580]     Audit-Id: 26ba2c09-73b1-4e51-9e9e-ecd30bcebde4
	I1218 23:54:50.683149  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:54:50.683156  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:54:50.683193  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:54:50.683308  881462 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"462"},"items":[{"metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I1218 23:54:50.683793  881462 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1218 23:54:50.683821  881462 node_conditions.go:123] node cpu capacity is 2
	I1218 23:54:50.683832  881462 node_conditions.go:105] duration metric: took 183.013375ms to run NodePressure ...
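The NodePressure step above reads each node's capacity (ephemeral storage, CPU) and conditions from the Node objects. A compact client-go sketch of the same read, under the same kubeconfig assumption as the earlier sketch:

// node_capacity_sketch.go - illustrative only, not minikube's node_conditions.go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure / DiskPressure / PIDPressure should all be False on a healthy node.
			if c.Type != corev1.NodeReady {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}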
	I1218 23:54:50.683843  881462 start.go:228] waiting for startup goroutines ...
	I1218 23:54:50.683852  881462 start.go:233] waiting for cluster config update ...
	I1218 23:54:50.683862  881462 start.go:242] writing updated cluster config ...
	I1218 23:54:50.686652  881462 out.go:177] 
	I1218 23:54:50.688989  881462 config.go:182] Loaded profile config "multinode-320272": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1218 23:54:50.689082  881462 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/config.json ...
	I1218 23:54:50.691212  881462 out.go:177] * Starting worker node multinode-320272-m02 in cluster multinode-320272
	I1218 23:54:50.693418  881462 cache.go:121] Beginning downloading kic base image for docker with crio
	I1218 23:54:50.695076  881462 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1218 23:54:50.696738  881462 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1218 23:54:50.696773  881462 cache.go:56] Caching tarball of preloaded images
	I1218 23:54:50.696830  881462 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 23:54:50.696916  881462 preload.go:174] Found /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1218 23:54:50.696935  881462 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1218 23:54:50.697062  881462 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/config.json ...
	I1218 23:54:50.715428  881462 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon, skipping pull
	I1218 23:54:50.715450  881462 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in daemon, skipping load
	I1218 23:54:50.715476  881462 cache.go:194] Successfully downloaded all kic artifacts
	I1218 23:54:50.715505  881462 start.go:365] acquiring machines lock for multinode-320272-m02: {Name:mk2affbc11391610fac186c7aa36c67ae24ed1a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 23:54:50.715634  881462 start.go:369] acquired machines lock for "multinode-320272-m02" in 109.85µs
	I1218 23:54:50.715660  881462 start.go:93] Provisioning new machine with config: &{Name:multinode-320272 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-320272 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1218 23:54:50.715741  881462 start.go:125] createHost starting for "m02" (driver="docker")
	I1218 23:54:50.718223  881462 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1218 23:54:50.718356  881462 start.go:159] libmachine.API.Create for "multinode-320272" (driver="docker")
	I1218 23:54:50.718381  881462 client.go:168] LocalClient.Create starting
	I1218 23:54:50.718451  881462 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem
	I1218 23:54:50.718486  881462 main.go:141] libmachine: Decoding PEM data...
	I1218 23:54:50.718502  881462 main.go:141] libmachine: Parsing certificate...
	I1218 23:54:50.718575  881462 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem
	I1218 23:54:50.718592  881462 main.go:141] libmachine: Decoding PEM data...
	I1218 23:54:50.718602  881462 main.go:141] libmachine: Parsing certificate...
	I1218 23:54:50.718841  881462 cli_runner.go:164] Run: docker network inspect multinode-320272 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 23:54:50.738807  881462 network_create.go:77] Found existing network {name:multinode-320272 subnet:0x4002f5a570 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1218 23:54:50.738858  881462 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-320272-m02" container
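The static IP above follows directly from the existing network: the gateway is 192.168.58.1, the primary node holds .2, so the new m02 machine gets .3. A hypothetical reconstruction of that arithmetic (the helper name and the /24 assumption are illustrative, not minikube's kic.go):

// static_ip_sketch.go - toy reconstruction, assumes an IPv4 /24 subnet.
package main

import (
	"fmt"
	"net"
)

// nthIP offsets the gateway address by n within its /24.
func nthIP(gateway net.IP, n int) net.IP {
	ip := gateway.To4()
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3] += byte(n) // no overflow handling in this toy version
	return out
}

func main() {
	gw := net.ParseIP("192.168.58.1")
	fmt.Println(nthIP(gw, 2)) // second machine in the cluster -> 192.168.58.3
}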
	I1218 23:54:50.738937  881462 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 23:54:50.756535  881462 cli_runner.go:164] Run: docker volume create multinode-320272-m02 --label name.minikube.sigs.k8s.io=multinode-320272-m02 --label created_by.minikube.sigs.k8s.io=true
	I1218 23:54:50.778579  881462 oci.go:103] Successfully created a docker volume multinode-320272-m02
	I1218 23:54:50.778671  881462 cli_runner.go:164] Run: docker run --rm --name multinode-320272-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-320272-m02 --entrypoint /usr/bin/test -v multinode-320272-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1218 23:54:51.360777  881462 oci.go:107] Successfully prepared a docker volume multinode-320272-m02
	I1218 23:54:51.360811  881462 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1218 23:54:51.360831  881462 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 23:54:51.360918  881462 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-320272-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 23:54:55.784131  881462 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-320272-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.423171697s)
	I1218 23:54:55.784170  881462 kic.go:203] duration metric: took 4.423337 seconds to extract preloaded images to volume
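The extraction step just completed runs tar inside the kicbase image, with the lz4 preload tarball mounted read-only and the new node's volume mounted as the target directory. A thin wrapper issuing the same docker invocation could look like this (image digest elided; minikube drives the command through its own cli_runner rather than code like this):

// preload_extract_sketch.go - illustrative wrapper around the docker run command above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4"
	volume := "multinode-320272-m02"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822"

	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}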
	W1218 23:54:55.784307  881462 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 23:54:55.784422  881462 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 23:54:55.855843  881462 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-320272-m02 --name multinode-320272-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-320272-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-320272-m02 --network multinode-320272 --ip 192.168.58.3 --volume multinode-320272-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1218 23:54:56.252364  881462 cli_runner.go:164] Run: docker container inspect multinode-320272-m02 --format={{.State.Running}}
	I1218 23:54:56.288114  881462 cli_runner.go:164] Run: docker container inspect multinode-320272-m02 --format={{.State.Status}}
	I1218 23:54:56.313664  881462 cli_runner.go:164] Run: docker exec multinode-320272-m02 stat /var/lib/dpkg/alternatives/iptables
	I1218 23:54:56.410265  881462 oci.go:144] the created container "multinode-320272-m02" has a running status.
	I1218 23:54:56.410291  881462 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272-m02/id_rsa...
	I1218 23:54:57.152802  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1218 23:54:57.152931  881462 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 23:54:57.184319  881462 cli_runner.go:164] Run: docker container inspect multinode-320272-m02 --format={{.State.Status}}
	I1218 23:54:57.212897  881462 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 23:54:57.212916  881462 kic_runner.go:114] Args: [docker exec --privileged multinode-320272-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 23:54:57.290506  881462 cli_runner.go:164] Run: docker container inspect multinode-320272-m02 --format={{.State.Status}}
	I1218 23:54:57.312809  881462 machine.go:88] provisioning docker machine ...
	I1218 23:54:57.312838  881462 ubuntu.go:169] provisioning hostname "multinode-320272-m02"
	I1218 23:54:57.313000  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272-m02
	I1218 23:54:57.344269  881462 main.go:141] libmachine: Using SSH client type: native
	I1218 23:54:57.344699  881462 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33521 <nil> <nil>}
	I1218 23:54:57.344713  881462 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-320272-m02 && echo "multinode-320272-m02" | sudo tee /etc/hostname
	I1218 23:54:57.550683  881462 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-320272-m02
	
	I1218 23:54:57.550836  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272-m02
	I1218 23:54:57.580382  881462 main.go:141] libmachine: Using SSH client type: native
	I1218 23:54:57.580783  881462 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33521 <nil> <nil>}
	I1218 23:54:57.580802  881462 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-320272-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-320272-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-320272-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 23:54:57.729265  881462 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 23:54:57.729349  881462 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17822-812008/.minikube CaCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17822-812008/.minikube}
	I1218 23:54:57.729380  881462 ubuntu.go:177] setting up certificates
	I1218 23:54:57.729416  881462 provision.go:83] configureAuth start
	I1218 23:54:57.729506  881462 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-320272-m02
	I1218 23:54:57.753659  881462 provision.go:138] copyHostCerts
	I1218 23:54:57.753709  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem
	I1218 23:54:57.753744  881462 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem, removing ...
	I1218 23:54:57.753751  881462 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem
	I1218 23:54:57.753828  881462 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem (1078 bytes)
	I1218 23:54:57.753902  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem
	I1218 23:54:57.753920  881462 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem, removing ...
	I1218 23:54:57.753924  881462 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem
	I1218 23:54:57.753953  881462 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem (1123 bytes)
	I1218 23:54:57.753991  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem
	I1218 23:54:57.754007  881462 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem, removing ...
	I1218 23:54:57.754011  881462 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem
	I1218 23:54:57.754032  881462 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem (1679 bytes)
	I1218 23:54:57.754072  881462 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem org=jenkins.multinode-320272-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-320272-m02]
	I1218 23:54:57.959821  881462 provision.go:172] copyRemoteCerts
	I1218 23:54:57.959891  881462 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 23:54:57.959936  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272-m02
	I1218 23:54:57.980520  881462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272-m02/id_rsa Username:docker}
	I1218 23:54:58.092028  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 23:54:58.092104  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 23:54:58.124772  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 23:54:58.124837  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1218 23:54:58.153640  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 23:54:58.153702  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 23:54:58.183253  881462 provision.go:86] duration metric: configureAuth took 453.807828ms
	I1218 23:54:58.183282  881462 ubuntu.go:193] setting minikube options for container-runtime
	I1218 23:54:58.183479  881462 config.go:182] Loaded profile config "multinode-320272": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1218 23:54:58.183598  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272-m02
	I1218 23:54:58.202531  881462 main.go:141] libmachine: Using SSH client type: native
	I1218 23:54:58.202980  881462 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33521 <nil> <nil>}
	I1218 23:54:58.203003  881462 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1218 23:54:58.473615  881462 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1218 23:54:58.473642  881462 machine.go:91] provisioned docker machine in 1.160813884s
	I1218 23:54:58.473652  881462 client.go:171] LocalClient.Create took 7.755265007s
	I1218 23:54:58.473665  881462 start.go:167] duration metric: libmachine.API.Create for "multinode-320272" took 7.755309577s
	I1218 23:54:58.473672  881462 start.go:300] post-start starting for "multinode-320272-m02" (driver="docker")
	I1218 23:54:58.473682  881462 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 23:54:58.473745  881462 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 23:54:58.473792  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272-m02
	I1218 23:54:58.492529  881462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272-m02/id_rsa Username:docker}
	I1218 23:54:58.599358  881462 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 23:54:58.603682  881462 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1218 23:54:58.603708  881462 command_runner.go:130] > NAME="Ubuntu"
	I1218 23:54:58.603715  881462 command_runner.go:130] > VERSION_ID="22.04"
	I1218 23:54:58.603722  881462 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1218 23:54:58.603728  881462 command_runner.go:130] > VERSION_CODENAME=jammy
	I1218 23:54:58.603732  881462 command_runner.go:130] > ID=ubuntu
	I1218 23:54:58.603737  881462 command_runner.go:130] > ID_LIKE=debian
	I1218 23:54:58.603742  881462 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1218 23:54:58.603748  881462 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1218 23:54:58.603756  881462 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1218 23:54:58.603764  881462 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1218 23:54:58.603769  881462 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1218 23:54:58.603833  881462 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 23:54:58.603860  881462 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1218 23:54:58.603870  881462 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1218 23:54:58.603878  881462 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1218 23:54:58.603888  881462 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/addons for local assets ...
	I1218 23:54:58.603972  881462 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/files for local assets ...
	I1218 23:54:58.604058  881462 filesync.go:149] local asset: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem -> 8173782.pem in /etc/ssl/certs
	I1218 23:54:58.604070  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem -> /etc/ssl/certs/8173782.pem
	I1218 23:54:58.604169  881462 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 23:54:58.614511  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem --> /etc/ssl/certs/8173782.pem (1708 bytes)
	I1218 23:54:58.644798  881462 start.go:303] post-start completed in 171.110371ms
	I1218 23:54:58.645216  881462 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-320272-m02
	I1218 23:54:58.663811  881462 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/config.json ...
	I1218 23:54:58.664256  881462 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 23:54:58.664306  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272-m02
	I1218 23:54:58.686375  881462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272-m02/id_rsa Username:docker}
	I1218 23:54:58.786736  881462 command_runner.go:130] > 18%!
	(MISSING)I1218 23:54:58.786808  881462 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 23:54:58.792292  881462 command_runner.go:130] > 160G
	I1218 23:54:58.792848  881462 start.go:128] duration metric: createHost completed in 8.077093568s
	I1218 23:54:58.792867  881462 start.go:83] releasing machines lock for "multinode-320272-m02", held for 8.077223857s
	I1218 23:54:58.792941  881462 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-320272-m02
	I1218 23:54:58.813422  881462 out.go:177] * Found network options:
	I1218 23:54:58.814999  881462 out.go:177]   - NO_PROXY=192.168.58.2
	W1218 23:54:58.817787  881462 proxy.go:119] fail to check proxy env: Error ip not in block
	W1218 23:54:58.817839  881462 proxy.go:119] fail to check proxy env: Error ip not in block
	I1218 23:54:58.817923  881462 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1218 23:54:58.817971  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272-m02
	I1218 23:54:58.817975  881462 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 23:54:58.818033  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272-m02
	I1218 23:54:58.837373  881462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272-m02/id_rsa Username:docker}
	I1218 23:54:58.838612  881462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272-m02/id_rsa Username:docker}
	I1218 23:54:59.125039  881462 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 23:54:59.125111  881462 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 23:54:59.130609  881462 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1218 23:54:59.130638  881462 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1218 23:54:59.130647  881462 command_runner.go:130] > Device: b3h/179d	Inode: 3636410     Links: 1
	I1218 23:54:59.130655  881462 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 23:54:59.130662  881462 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1218 23:54:59.130669  881462 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1218 23:54:59.130675  881462 command_runner.go:130] > Change: 2023-12-18 23:32:03.407141962 +0000
	I1218 23:54:59.130684  881462 command_runner.go:130] >  Birth: 2023-12-18 23:32:03.407141962 +0000
	I1218 23:54:59.130949  881462 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 23:54:59.155651  881462 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1218 23:54:59.155735  881462 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 23:54:59.192749  881462 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1218 23:54:59.192785  881462 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1218 23:54:59.192793  881462 start.go:475] detecting cgroup driver to use...
	I1218 23:54:59.192823  881462 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1218 23:54:59.192872  881462 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 23:54:59.212168  881462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 23:54:59.225860  881462 docker.go:203] disabling cri-docker service (if available) ...
	I1218 23:54:59.225972  881462 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1218 23:54:59.241071  881462 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1218 23:54:59.257607  881462 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1218 23:54:59.359818  881462 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1218 23:54:59.376328  881462 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1218 23:54:59.464579  881462 docker.go:219] disabling docker service ...
	I1218 23:54:59.464704  881462 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1218 23:54:59.487288  881462 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1218 23:54:59.501424  881462 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1218 23:54:59.517906  881462 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1218 23:54:59.603229  881462 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1218 23:54:59.702295  881462 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1218 23:54:59.702372  881462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1218 23:54:59.717007  881462 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 23:54:59.737099  881462 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1218 23:54:59.738525  881462 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1218 23:54:59.738591  881462 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1218 23:54:59.750980  881462 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1218 23:54:59.751102  881462 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1218 23:54:59.764031  881462 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1218 23:54:59.776540  881462 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1218 23:54:59.788426  881462 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 23:54:59.800707  881462 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 23:54:59.809987  881462 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 23:54:59.811292  881462 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 23:54:59.822217  881462 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 23:54:59.926182  881462 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1218 23:55:00.225006  881462 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I1218 23:55:00.225101  881462 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1218 23:55:00.250418  881462 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1218 23:55:00.250511  881462 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1218 23:55:00.250534  881462 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I1218 23:55:00.250571  881462 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 23:55:00.250605  881462 command_runner.go:130] > Access: 2023-12-18 23:55:00.183776113 +0000
	I1218 23:55:00.250632  881462 command_runner.go:130] > Modify: 2023-12-18 23:55:00.183776113 +0000
	I1218 23:55:00.250655  881462 command_runner.go:130] > Change: 2023-12-18 23:55:00.183776113 +0000
	I1218 23:55:00.250685  881462 command_runner.go:130] >  Birth: -
	I1218 23:55:00.250899  881462 start.go:543] Will wait 60s for crictl version
	I1218 23:55:00.251055  881462 ssh_runner.go:195] Run: which crictl
	I1218 23:55:00.270228  881462 command_runner.go:130] > /usr/bin/crictl
	I1218 23:55:00.271283  881462 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1218 23:55:00.391094  881462 command_runner.go:130] > Version:  0.1.0
	I1218 23:55:00.391539  881462 command_runner.go:130] > RuntimeName:  cri-o
	I1218 23:55:00.391598  881462 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1218 23:55:00.391623  881462 command_runner.go:130] > RuntimeApiVersion:  v1
	I1218 23:55:00.399465  881462 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1218 23:55:00.399672  881462 ssh_runner.go:195] Run: crio --version
	I1218 23:55:00.452682  881462 command_runner.go:130] > crio version 1.24.6
	I1218 23:55:00.452775  881462 command_runner.go:130] > Version:          1.24.6
	I1218 23:55:00.452801  881462 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1218 23:55:00.452842  881462 command_runner.go:130] > GitTreeState:     clean
	I1218 23:55:00.452867  881462 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1218 23:55:00.452888  881462 command_runner.go:130] > GoVersion:        go1.18.2
	I1218 23:55:00.452920  881462 command_runner.go:130] > Compiler:         gc
	I1218 23:55:00.452942  881462 command_runner.go:130] > Platform:         linux/arm64
	I1218 23:55:00.452961  881462 command_runner.go:130] > Linkmode:         dynamic
	I1218 23:55:00.453004  881462 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1218 23:55:00.453030  881462 command_runner.go:130] > SeccompEnabled:   true
	I1218 23:55:00.453050  881462 command_runner.go:130] > AppArmorEnabled:  false
	I1218 23:55:00.457273  881462 ssh_runner.go:195] Run: crio --version
	I1218 23:55:00.505807  881462 command_runner.go:130] > crio version 1.24.6
	I1218 23:55:00.505831  881462 command_runner.go:130] > Version:          1.24.6
	I1218 23:55:00.505841  881462 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1218 23:55:00.505847  881462 command_runner.go:130] > GitTreeState:     clean
	I1218 23:55:00.505854  881462 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1218 23:55:00.505860  881462 command_runner.go:130] > GoVersion:        go1.18.2
	I1218 23:55:00.505866  881462 command_runner.go:130] > Compiler:         gc
	I1218 23:55:00.505872  881462 command_runner.go:130] > Platform:         linux/arm64
	I1218 23:55:00.505878  881462 command_runner.go:130] > Linkmode:         dynamic
	I1218 23:55:00.505894  881462 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1218 23:55:00.505901  881462 command_runner.go:130] > SeccompEnabled:   true
	I1218 23:55:00.505908  881462 command_runner.go:130] > AppArmorEnabled:  false
	I1218 23:55:00.511731  881462 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1218 23:55:00.513768  881462 out.go:177]   - env NO_PROXY=192.168.58.2
	I1218 23:55:00.515522  881462 cli_runner.go:164] Run: docker network inspect multinode-320272 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 23:55:00.536238  881462 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1218 23:55:00.541566  881462 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 23:55:00.557174  881462 certs.go:56] Setting up /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272 for IP: 192.168.58.3
	I1218 23:55:00.557208  881462 certs.go:190] acquiring lock for shared ca certs: {Name:mkb7306ae237ed30250289faa05e9a8d3ae56985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 23:55:00.557407  881462 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.key
	I1218 23:55:00.557450  881462 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.key
	I1218 23:55:00.557461  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1218 23:55:00.557477  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1218 23:55:00.557488  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1218 23:55:00.557502  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1218 23:55:00.557565  881462 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/817378.pem (1338 bytes)
	W1218 23:55:00.557595  881462 certs.go:433] ignoring /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/817378_empty.pem, impossibly tiny 0 bytes
	I1218 23:55:00.557604  881462 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 23:55:00.557630  881462 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem (1078 bytes)
	I1218 23:55:00.557655  881462 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem (1123 bytes)
	I1218 23:55:00.557683  881462 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem (1679 bytes)
	I1218 23:55:00.557737  881462 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem (1708 bytes)
	I1218 23:55:00.557770  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem -> /usr/share/ca-certificates/8173782.pem
	I1218 23:55:00.557785  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:55:00.557797  881462 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/817378.pem -> /usr/share/ca-certificates/817378.pem
	I1218 23:55:00.558251  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 23:55:00.592994  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1218 23:55:00.625946  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 23:55:00.657883  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1218 23:55:00.690578  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem --> /usr/share/ca-certificates/8173782.pem (1708 bytes)
	I1218 23:55:00.722108  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 23:55:00.753699  881462 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/certs/817378.pem --> /usr/share/ca-certificates/817378.pem (1338 bytes)
	I1218 23:55:00.784019  881462 ssh_runner.go:195] Run: openssl version
	I1218 23:55:00.791035  881462 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1218 23:55:00.791342  881462 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8173782.pem && ln -fs /usr/share/ca-certificates/8173782.pem /etc/ssl/certs/8173782.pem"
	I1218 23:55:00.803691  881462 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8173782.pem
	I1218 23:55:00.808365  881462 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 18 23:39 /usr/share/ca-certificates/8173782.pem
	I1218 23:55:00.808664  881462 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 18 23:39 /usr/share/ca-certificates/8173782.pem
	I1218 23:55:00.808724  881462 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8173782.pem
	I1218 23:55:00.817157  881462 command_runner.go:130] > 3ec20f2e
	I1218 23:55:00.817677  881462 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8173782.pem /etc/ssl/certs/3ec20f2e.0"
	I1218 23:55:00.830009  881462 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 23:55:00.841982  881462 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:55:00.846661  881462 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 18 23:32 /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:55:00.846696  881462 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 23:32 /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:55:00.846776  881462 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 23:55:00.855015  881462 command_runner.go:130] > b5213941
	I1218 23:55:00.855463  881462 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1218 23:55:00.867357  881462 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/817378.pem && ln -fs /usr/share/ca-certificates/817378.pem /etc/ssl/certs/817378.pem"
	I1218 23:55:00.879165  881462 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/817378.pem
	I1218 23:55:00.883750  881462 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 18 23:39 /usr/share/ca-certificates/817378.pem
	I1218 23:55:00.883783  881462 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 18 23:39 /usr/share/ca-certificates/817378.pem
	I1218 23:55:00.883856  881462 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/817378.pem
	I1218 23:55:00.892382  881462 command_runner.go:130] > 51391683
	I1218 23:55:00.892821  881462 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/817378.pem /etc/ssl/certs/51391683.0"
	I1218 23:55:00.904401  881462 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 23:55:00.908884  881462 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 23:55:00.908964  881462 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 23:55:00.909075  881462 ssh_runner.go:195] Run: crio config
	I1218 23:55:00.960385  881462 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1218 23:55:00.960421  881462 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1218 23:55:00.960431  881462 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1218 23:55:00.960435  881462 command_runner.go:130] > #
	I1218 23:55:00.960444  881462 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1218 23:55:00.960452  881462 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1218 23:55:00.960462  881462 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1218 23:55:00.960476  881462 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1218 23:55:00.960488  881462 command_runner.go:130] > # reload'.
	I1218 23:55:00.960500  881462 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1218 23:55:00.960508  881462 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1218 23:55:00.960528  881462 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1218 23:55:00.960535  881462 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1218 23:55:00.960539  881462 command_runner.go:130] > [crio]
	I1218 23:55:00.960547  881462 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1218 23:55:00.960560  881462 command_runner.go:130] > # containers images, in this directory.
	I1218 23:55:00.960575  881462 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1218 23:55:00.960583  881462 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1218 23:55:00.960592  881462 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1218 23:55:00.960599  881462 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1218 23:55:00.960607  881462 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1218 23:55:00.960616  881462 command_runner.go:130] > # storage_driver = "vfs"
	I1218 23:55:00.960624  881462 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1218 23:55:00.960638  881462 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1218 23:55:00.960646  881462 command_runner.go:130] > # storage_option = [
	I1218 23:55:00.960657  881462 command_runner.go:130] > # ]
	I1218 23:55:00.960666  881462 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1218 23:55:00.960677  881462 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1218 23:55:00.960683  881462 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1218 23:55:00.960692  881462 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1218 23:55:00.960700  881462 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1218 23:55:00.960715  881462 command_runner.go:130] > # always happen on a node reboot
	I1218 23:55:00.960722  881462 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1218 23:55:00.960731  881462 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1218 23:55:00.960740  881462 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1218 23:55:00.960752  881462 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1218 23:55:00.960760  881462 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1218 23:55:00.960773  881462 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1218 23:55:00.960788  881462 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1218 23:55:00.960984  881462 command_runner.go:130] > # internal_wipe = true
	I1218 23:55:00.961022  881462 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1218 23:55:00.961044  881462 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1218 23:55:00.961065  881462 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1218 23:55:00.961234  881462 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1218 23:55:00.961328  881462 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1218 23:55:00.961335  881462 command_runner.go:130] > [crio.api]
	I1218 23:55:00.961346  881462 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1218 23:55:00.961358  881462 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1218 23:55:00.961365  881462 command_runner.go:130] > # IP address on which the stream server will listen.
	I1218 23:55:00.961374  881462 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1218 23:55:00.961382  881462 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1218 23:55:00.961392  881462 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1218 23:55:00.961406  881462 command_runner.go:130] > # stream_port = "0"
	I1218 23:55:00.961414  881462 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1218 23:55:00.961419  881462 command_runner.go:130] > # stream_enable_tls = false
	I1218 23:55:00.961426  881462 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1218 23:55:00.961436  881462 command_runner.go:130] > # stream_idle_timeout = ""
	I1218 23:55:00.961444  881462 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1218 23:55:00.961453  881462 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1218 23:55:00.961462  881462 command_runner.go:130] > # minutes.
	I1218 23:55:00.961468  881462 command_runner.go:130] > # stream_tls_cert = ""
	I1218 23:55:00.961487  881462 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1218 23:55:00.961495  881462 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1218 23:55:00.961503  881462 command_runner.go:130] > # stream_tls_key = ""
	I1218 23:55:00.961511  881462 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1218 23:55:00.961518  881462 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1218 23:55:00.961536  881462 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1218 23:55:00.961541  881462 command_runner.go:130] > # stream_tls_ca = ""
	I1218 23:55:00.961557  881462 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1218 23:55:00.961566  881462 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1218 23:55:00.961575  881462 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1218 23:55:00.961583  881462 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1218 23:55:00.961596  881462 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1218 23:55:00.961603  881462 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1218 23:55:00.961612  881462 command_runner.go:130] > [crio.runtime]
	I1218 23:55:00.961627  881462 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1218 23:55:00.961637  881462 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1218 23:55:00.961642  881462 command_runner.go:130] > # "nofile=1024:2048"
	I1218 23:55:00.961649  881462 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1218 23:55:00.961656  881462 command_runner.go:130] > # default_ulimits = [
	I1218 23:55:00.961661  881462 command_runner.go:130] > # ]
	I1218 23:55:00.961669  881462 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1218 23:55:00.961678  881462 command_runner.go:130] > # no_pivot = false
	I1218 23:55:00.961685  881462 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1218 23:55:00.961700  881462 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1218 23:55:00.961711  881462 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1218 23:55:00.961718  881462 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1218 23:55:00.961728  881462 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1218 23:55:00.961736  881462 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1218 23:55:00.961741  881462 command_runner.go:130] > # conmon = ""
	I1218 23:55:00.961749  881462 command_runner.go:130] > # Cgroup setting for conmon
	I1218 23:55:00.961760  881462 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1218 23:55:00.961765  881462 command_runner.go:130] > conmon_cgroup = "pod"
	I1218 23:55:00.961782  881462 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1218 23:55:00.961789  881462 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1218 23:55:00.961801  881462 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1218 23:55:00.961806  881462 command_runner.go:130] > # conmon_env = [
	I1218 23:55:00.961982  881462 command_runner.go:130] > # ]
	I1218 23:55:00.962018  881462 command_runner.go:130] > # Additional environment variables to set for all the
	I1218 23:55:00.962040  881462 command_runner.go:130] > # containers. These are overridden if set in the
	I1218 23:55:00.962060  881462 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1218 23:55:00.962224  881462 command_runner.go:130] > # default_env = [
	I1218 23:55:00.962284  881462 command_runner.go:130] > # ]
	I1218 23:55:00.962292  881462 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1218 23:55:00.962297  881462 command_runner.go:130] > # selinux = false
	I1218 23:55:00.962313  881462 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1218 23:55:00.962324  881462 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1218 23:55:00.962333  881462 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1218 23:55:00.962341  881462 command_runner.go:130] > # seccomp_profile = ""
	I1218 23:55:00.962348  881462 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1218 23:55:00.962358  881462 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1218 23:55:00.962366  881462 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1218 23:55:00.962375  881462 command_runner.go:130] > # which might increase security.
	I1218 23:55:00.962387  881462 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1218 23:55:00.962397  881462 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1218 23:55:00.962405  881462 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1218 23:55:00.962415  881462 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1218 23:55:00.962426  881462 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1218 23:55:00.962432  881462 command_runner.go:130] > # This option supports live configuration reload.
	I1218 23:55:00.962441  881462 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1218 23:55:00.962449  881462 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1218 23:55:00.962464  881462 command_runner.go:130] > # the cgroup blockio controller.
	I1218 23:55:00.962472  881462 command_runner.go:130] > # blockio_config_file = ""
	I1218 23:55:00.962480  881462 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1218 23:55:00.962485  881462 command_runner.go:130] > # irqbalance daemon.
	I1218 23:55:00.962494  881462 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1218 23:55:00.962505  881462 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1218 23:55:00.962520  881462 command_runner.go:130] > # This option supports live configuration reload.
	I1218 23:55:00.962526  881462 command_runner.go:130] > # rdt_config_file = ""
	I1218 23:55:00.962539  881462 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1218 23:55:00.962547  881462 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1218 23:55:00.962555  881462 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1218 23:55:00.962563  881462 command_runner.go:130] > # separate_pull_cgroup = ""
	I1218 23:55:00.962582  881462 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1218 23:55:00.962593  881462 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1218 23:55:00.962599  881462 command_runner.go:130] > # will be added.
	I1218 23:55:00.962612  881462 command_runner.go:130] > # default_capabilities = [
	I1218 23:55:00.962621  881462 command_runner.go:130] > # 	"CHOWN",
	I1218 23:55:00.962626  881462 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1218 23:55:00.962631  881462 command_runner.go:130] > # 	"FSETID",
	I1218 23:55:00.962796  881462 command_runner.go:130] > # 	"FOWNER",
	I1218 23:55:00.962828  881462 command_runner.go:130] > # 	"SETGID",
	I1218 23:55:00.962967  881462 command_runner.go:130] > # 	"SETUID",
	I1218 23:55:00.963001  881462 command_runner.go:130] > # 	"SETPCAP",
	I1218 23:55:00.963020  881462 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1218 23:55:00.963038  881462 command_runner.go:130] > # 	"KILL",
	I1218 23:55:00.963057  881462 command_runner.go:130] > # ]
	I1218 23:55:00.963094  881462 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1218 23:55:00.963115  881462 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1218 23:55:00.963134  881462 command_runner.go:130] > # add_inheritable_capabilities = true
	I1218 23:55:00.963164  881462 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1218 23:55:00.963187  881462 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1218 23:55:00.963206  881462 command_runner.go:130] > # default_sysctls = [
	I1218 23:55:00.963222  881462 command_runner.go:130] > # ]
	I1218 23:55:00.963241  881462 command_runner.go:130] > # List of devices on the host that a
	I1218 23:55:00.963270  881462 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1218 23:55:00.963292  881462 command_runner.go:130] > # allowed_devices = [
	I1218 23:55:00.963310  881462 command_runner.go:130] > # 	"/dev/fuse",
	I1218 23:55:00.963327  881462 command_runner.go:130] > # ]
	I1218 23:55:00.963345  881462 command_runner.go:130] > # List of additional devices. specified as
	I1218 23:55:00.963384  881462 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1218 23:55:00.963408  881462 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1218 23:55:00.963428  881462 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1218 23:55:00.963445  881462 command_runner.go:130] > # additional_devices = [
	I1218 23:55:00.963462  881462 command_runner.go:130] > # ]
	I1218 23:55:00.963491  881462 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1218 23:55:00.963513  881462 command_runner.go:130] > # cdi_spec_dirs = [
	I1218 23:55:00.963693  881462 command_runner.go:130] > # 	"/etc/cdi",
	I1218 23:55:00.963918  881462 command_runner.go:130] > # 	"/var/run/cdi",
	I1218 23:55:00.964037  881462 command_runner.go:130] > # ]
	I1218 23:55:00.964068  881462 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1218 23:55:00.964091  881462 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1218 23:55:00.964110  881462 command_runner.go:130] > # Defaults to false.
	I1218 23:55:00.964142  881462 command_runner.go:130] > # device_ownership_from_security_context = false
	I1218 23:55:00.964171  881462 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1218 23:55:00.964190  881462 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1218 23:55:00.964207  881462 command_runner.go:130] > # hooks_dir = [
	I1218 23:55:00.964225  881462 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1218 23:55:00.964260  881462 command_runner.go:130] > # ]
	I1218 23:55:00.964280  881462 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1218 23:55:00.964301  881462 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1218 23:55:00.964320  881462 command_runner.go:130] > # its default mounts from the following two files:
	I1218 23:55:00.964345  881462 command_runner.go:130] > #
	I1218 23:55:00.964371  881462 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1218 23:55:00.964391  881462 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1218 23:55:00.964411  881462 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1218 23:55:00.964426  881462 command_runner.go:130] > #
	I1218 23:55:00.964455  881462 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1218 23:55:00.964488  881462 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1218 23:55:00.964510  881462 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1218 23:55:00.964528  881462 command_runner.go:130] > #      only add mounts it finds in this file.
	I1218 23:55:00.964546  881462 command_runner.go:130] > #
	I1218 23:55:00.964576  881462 command_runner.go:130] > # default_mounts_file = ""
	I1218 23:55:00.964602  881462 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1218 23:55:00.964623  881462 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1218 23:55:00.964641  881462 command_runner.go:130] > # pids_limit = 0
	I1218 23:55:00.964661  881462 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1218 23:55:00.964694  881462 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1218 23:55:00.964721  881462 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1218 23:55:00.964745  881462 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1218 23:55:00.964762  881462 command_runner.go:130] > # log_size_max = -1
	I1218 23:55:00.964794  881462 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1218 23:55:00.964825  881462 command_runner.go:130] > # log_to_journald = false
	I1218 23:55:00.964845  881462 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1218 23:55:00.964864  881462 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1218 23:55:00.964882  881462 command_runner.go:130] > # Path to directory for container attach sockets.
	I1218 23:55:00.964909  881462 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1218 23:55:00.964932  881462 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1218 23:55:00.964949  881462 command_runner.go:130] > # bind_mount_prefix = ""
	I1218 23:55:00.964968  881462 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1218 23:55:00.964989  881462 command_runner.go:130] > # read_only = false
	I1218 23:55:00.965015  881462 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1218 23:55:00.965039  881462 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1218 23:55:00.965056  881462 command_runner.go:130] > # live configuration reload.
	I1218 23:55:00.965073  881462 command_runner.go:130] > # log_level = "info"
	I1218 23:55:00.965092  881462 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1218 23:55:00.965119  881462 command_runner.go:130] > # This option supports live configuration reload.
	I1218 23:55:00.965140  881462 command_runner.go:130] > # log_filter = ""
	I1218 23:55:00.965161  881462 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1218 23:55:00.965181  881462 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1218 23:55:00.965203  881462 command_runner.go:130] > # separated by comma.
	I1218 23:55:00.965233  881462 command_runner.go:130] > # uid_mappings = ""
	I1218 23:55:00.965253  881462 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1218 23:55:00.965274  881462 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1218 23:55:00.965337  881462 command_runner.go:130] > # separated by comma.
	I1218 23:55:00.965364  881462 command_runner.go:130] > # gid_mappings = ""
	I1218 23:55:00.965391  881462 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1218 23:55:00.965422  881462 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1218 23:55:00.965451  881462 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1218 23:55:00.965470  881462 command_runner.go:130] > # minimum_mappable_uid = -1
	I1218 23:55:00.965490  881462 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1218 23:55:00.965524  881462 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1218 23:55:00.965546  881462 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1218 23:55:00.965570  881462 command_runner.go:130] > # minimum_mappable_gid = -1
	I1218 23:55:00.965589  881462 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1218 23:55:00.965617  881462 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1218 23:55:00.965644  881462 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1218 23:55:00.965666  881462 command_runner.go:130] > # ctr_stop_timeout = 30
	I1218 23:55:00.965686  881462 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1218 23:55:00.965720  881462 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1218 23:55:00.965743  881462 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1218 23:55:00.965760  881462 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1218 23:55:00.965783  881462 command_runner.go:130] > # drop_infra_ctr = true
	I1218 23:55:00.965802  881462 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1218 23:55:00.965839  881462 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1218 23:55:00.965861  881462 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1218 23:55:00.965883  881462 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1218 23:55:00.965913  881462 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1218 23:55:00.965933  881462 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1218 23:55:00.965957  881462 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1218 23:55:00.965993  881462 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1218 23:55:00.966019  881462 command_runner.go:130] > # pinns_path = ""
	I1218 23:55:00.966039  881462 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1218 23:55:00.966061  881462 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1218 23:55:00.966090  881462 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1218 23:55:00.966115  881462 command_runner.go:130] > # default_runtime = "runc"
	I1218 23:55:00.966134  881462 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1218 23:55:00.966160  881462 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1218 23:55:00.966194  881462 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1218 23:55:00.966221  881462 command_runner.go:130] > # creation as a file is not desired either.
	I1218 23:55:00.966243  881462 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1218 23:55:00.966262  881462 command_runner.go:130] > # the hostname is being managed dynamically.
	I1218 23:55:00.966288  881462 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1218 23:55:00.966311  881462 command_runner.go:130] > # ]
	I1218 23:55:00.966332  881462 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1218 23:55:00.966359  881462 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1218 23:55:00.966387  881462 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1218 23:55:00.966420  881462 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1218 23:55:00.966437  881462 command_runner.go:130] > #
	I1218 23:55:00.966461  881462 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1218 23:55:00.966493  881462 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1218 23:55:00.966523  881462 command_runner.go:130] > #  runtime_type = "oci"
	I1218 23:55:00.966541  881462 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1218 23:55:00.966559  881462 command_runner.go:130] > #  privileged_without_host_devices = false
	I1218 23:55:00.966590  881462 command_runner.go:130] > #  allowed_annotations = []
	I1218 23:55:00.966621  881462 command_runner.go:130] > # Where:
	I1218 23:55:00.966645  881462 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1218 23:55:00.966664  881462 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1218 23:55:00.966698  881462 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1218 23:55:00.966725  881462 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1218 23:55:00.966743  881462 command_runner.go:130] > #   in $PATH.
	I1218 23:55:00.966763  881462 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1218 23:55:00.966792  881462 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1218 23:55:00.966820  881462 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1218 23:55:00.966838  881462 command_runner.go:130] > #   state.
	I1218 23:55:00.966859  881462 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1218 23:55:00.966888  881462 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1218 23:55:00.967166  881462 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1218 23:55:00.967357  881462 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1218 23:55:00.968007  881462 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1218 23:55:00.968030  881462 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1218 23:55:00.968037  881462 command_runner.go:130] > #   The currently recognized values are:
	I1218 23:55:00.968045  881462 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1218 23:55:00.968063  881462 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1218 23:55:00.968075  881462 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1218 23:55:00.968087  881462 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1218 23:55:00.968096  881462 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1218 23:55:00.968105  881462 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1218 23:55:00.968117  881462 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1218 23:55:00.968133  881462 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1218 23:55:00.968140  881462 command_runner.go:130] > #   should be moved to the container's cgroup
	I1218 23:55:00.968146  881462 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1218 23:55:00.968152  881462 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1218 23:55:00.968157  881462 command_runner.go:130] > runtime_type = "oci"
	I1218 23:55:00.968163  881462 command_runner.go:130] > runtime_root = "/run/runc"
	I1218 23:55:00.968173  881462 command_runner.go:130] > runtime_config_path = ""
	I1218 23:55:00.968179  881462 command_runner.go:130] > monitor_path = ""
	I1218 23:55:00.968184  881462 command_runner.go:130] > monitor_cgroup = ""
	I1218 23:55:00.968190  881462 command_runner.go:130] > monitor_exec_cgroup = ""
	I1218 23:55:00.968225  881462 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1218 23:55:00.968231  881462 command_runner.go:130] > # running containers
	I1218 23:55:00.968236  881462 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1218 23:55:00.968244  881462 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1218 23:55:00.968252  881462 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1218 23:55:00.968264  881462 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1218 23:55:00.968271  881462 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1218 23:55:00.968277  881462 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1218 23:55:00.968289  881462 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1218 23:55:00.968295  881462 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1218 23:55:00.968301  881462 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1218 23:55:00.968306  881462 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1218 23:55:00.968315  881462 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1218 23:55:00.968321  881462 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1218 23:55:00.968329  881462 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1218 23:55:00.968339  881462 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1218 23:55:00.968364  881462 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1218 23:55:00.968371  881462 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1218 23:55:00.968385  881462 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1218 23:55:00.968394  881462 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1218 23:55:00.968402  881462 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1218 23:55:00.968411  881462 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1218 23:55:00.968415  881462 command_runner.go:130] > # Example:
	I1218 23:55:00.968421  881462 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1218 23:55:00.968427  881462 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1218 23:55:00.968439  881462 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1218 23:55:00.968446  881462 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1218 23:55:00.968451  881462 command_runner.go:130] > # cpuset = 0
	I1218 23:55:00.968456  881462 command_runner.go:130] > # cpushares = "0-1"
	I1218 23:55:00.968460  881462 command_runner.go:130] > # Where:
	I1218 23:55:00.968466  881462 command_runner.go:130] > # The workload name is workload-type.
	I1218 23:55:00.968474  881462 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1218 23:55:00.968481  881462 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1218 23:55:00.968488  881462 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1218 23:55:00.968498  881462 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1218 23:55:00.968511  881462 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1218 23:55:00.968516  881462 command_runner.go:130] > # 
	I1218 23:55:00.968524  881462 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1218 23:55:00.968528  881462 command_runner.go:130] > #
	I1218 23:55:00.968535  881462 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1218 23:55:00.968543  881462 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1218 23:55:00.968555  881462 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1218 23:55:00.968563  881462 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1218 23:55:00.968578  881462 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1218 23:55:00.968583  881462 command_runner.go:130] > [crio.image]
	I1218 23:55:00.968590  881462 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1218 23:55:00.968596  881462 command_runner.go:130] > # default_transport = "docker://"
	I1218 23:55:00.968604  881462 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1218 23:55:00.968612  881462 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1218 23:55:00.968618  881462 command_runner.go:130] > # global_auth_file = ""
	I1218 23:55:00.968630  881462 command_runner.go:130] > # The image used to instantiate infra containers.
	I1218 23:55:00.968637  881462 command_runner.go:130] > # This option supports live configuration reload.
	I1218 23:55:00.968649  881462 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1218 23:55:00.968660  881462 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1218 23:55:00.968668  881462 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1218 23:55:00.968674  881462 command_runner.go:130] > # This option supports live configuration reload.
	I1218 23:55:00.968680  881462 command_runner.go:130] > # pause_image_auth_file = ""
	I1218 23:55:00.968687  881462 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1218 23:55:00.968695  881462 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1218 23:55:00.968702  881462 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1218 23:55:00.968709  881462 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1218 23:55:00.968714  881462 command_runner.go:130] > # pause_command = "/pause"
	I1218 23:55:00.968728  881462 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1218 23:55:00.968738  881462 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1218 23:55:00.968745  881462 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1218 23:55:00.968752  881462 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1218 23:55:00.968759  881462 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1218 23:55:00.968764  881462 command_runner.go:130] > # signature_policy = ""
	I1218 23:55:00.968771  881462 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1218 23:55:00.968779  881462 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1218 23:55:00.968784  881462 command_runner.go:130] > # changing them here.
	I1218 23:55:00.968789  881462 command_runner.go:130] > # insecure_registries = [
	I1218 23:55:00.968798  881462 command_runner.go:130] > # ]
	I1218 23:55:00.968806  881462 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1218 23:55:00.968812  881462 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1218 23:55:00.968817  881462 command_runner.go:130] > # image_volumes = "mkdir"
	I1218 23:55:00.968824  881462 command_runner.go:130] > # Temporary directory to use for storing big files
	I1218 23:55:00.968829  881462 command_runner.go:130] > # big_files_temporary_dir = ""
	I1218 23:55:00.968837  881462 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1218 23:55:00.968841  881462 command_runner.go:130] > # CNI plugins.
	I1218 23:55:00.968846  881462 command_runner.go:130] > [crio.network]
	I1218 23:55:00.968856  881462 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1218 23:55:00.968863  881462 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1218 23:55:00.968874  881462 command_runner.go:130] > # cni_default_network = ""
	I1218 23:55:00.968881  881462 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1218 23:55:00.968888  881462 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1218 23:55:00.968900  881462 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1218 23:55:00.968904  881462 command_runner.go:130] > # plugin_dirs = [
	I1218 23:55:00.968911  881462 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1218 23:55:00.968915  881462 command_runner.go:130] > # ]
	I1218 23:55:00.968922  881462 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1218 23:55:00.968927  881462 command_runner.go:130] > [crio.metrics]
	I1218 23:55:00.968934  881462 command_runner.go:130] > # Globally enable or disable metrics support.
	I1218 23:55:00.968950  881462 command_runner.go:130] > # enable_metrics = false
	I1218 23:55:00.968956  881462 command_runner.go:130] > # Specify enabled metrics collectors.
	I1218 23:55:00.968962  881462 command_runner.go:130] > # Per default all metrics are enabled.
	I1218 23:55:00.968969  881462 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1218 23:55:00.968977  881462 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1218 23:55:00.968984  881462 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1218 23:55:00.968989  881462 command_runner.go:130] > # metrics_collectors = [
	I1218 23:55:00.968994  881462 command_runner.go:130] > # 	"operations",
	I1218 23:55:00.969004  881462 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1218 23:55:00.969010  881462 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1218 23:55:00.969029  881462 command_runner.go:130] > # 	"operations_errors",
	I1218 23:55:00.969035  881462 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1218 23:55:00.969040  881462 command_runner.go:130] > # 	"image_pulls_by_name",
	I1218 23:55:00.969050  881462 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1218 23:55:00.969056  881462 command_runner.go:130] > # 	"image_pulls_failures",
	I1218 23:55:00.969069  881462 command_runner.go:130] > # 	"image_pulls_successes",
	I1218 23:55:00.969075  881462 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1218 23:55:00.969080  881462 command_runner.go:130] > # 	"image_layer_reuse",
	I1218 23:55:00.969085  881462 command_runner.go:130] > # 	"containers_oom_total",
	I1218 23:55:00.969091  881462 command_runner.go:130] > # 	"containers_oom",
	I1218 23:55:00.969110  881462 command_runner.go:130] > # 	"processes_defunct",
	I1218 23:55:00.969115  881462 command_runner.go:130] > # 	"operations_total",
	I1218 23:55:00.969126  881462 command_runner.go:130] > # 	"operations_latency_seconds",
	I1218 23:55:00.969132  881462 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1218 23:55:00.969143  881462 command_runner.go:130] > # 	"operations_errors_total",
	I1218 23:55:00.969148  881462 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1218 23:55:00.969159  881462 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1218 23:55:00.969164  881462 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1218 23:55:00.969179  881462 command_runner.go:130] > # 	"image_pulls_success_total",
	I1218 23:55:00.969190  881462 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1218 23:55:00.969196  881462 command_runner.go:130] > # 	"containers_oom_count_total",
	I1218 23:55:00.969202  881462 command_runner.go:130] > # ]
	I1218 23:55:00.969212  881462 command_runner.go:130] > # The port on which the metrics server will listen.
	I1218 23:55:00.969217  881462 command_runner.go:130] > # metrics_port = 9090
	I1218 23:55:00.969227  881462 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1218 23:55:00.969233  881462 command_runner.go:130] > # metrics_socket = ""
	I1218 23:55:00.969251  881462 command_runner.go:130] > # The certificate for the secure metrics server.
	I1218 23:55:00.969260  881462 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1218 23:55:00.969272  881462 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1218 23:55:00.969299  881462 command_runner.go:130] > # certificate on any modification event.
	I1218 23:55:00.969313  881462 command_runner.go:130] > # metrics_cert = ""
	I1218 23:55:00.969326  881462 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1218 23:55:00.969336  881462 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1218 23:55:00.969341  881462 command_runner.go:130] > # metrics_key = ""
	I1218 23:55:00.969349  881462 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1218 23:55:00.969354  881462 command_runner.go:130] > [crio.tracing]
	I1218 23:55:00.969364  881462 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1218 23:55:00.969375  881462 command_runner.go:130] > # enable_tracing = false
	I1218 23:55:00.969386  881462 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1218 23:55:00.969397  881462 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1218 23:55:00.969407  881462 command_runner.go:130] > # Number of samples to collect per million spans.
	I1218 23:55:00.969414  881462 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1218 23:55:00.969428  881462 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1218 23:55:00.969433  881462 command_runner.go:130] > [crio.stats]
	I1218 23:55:00.969440  881462 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1218 23:55:00.969447  881462 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1218 23:55:00.969456  881462 command_runner.go:130] > # stats_collection_period = 0
	I1218 23:55:00.969692  881462 command_runner.go:130] ! time="2023-12-18 23:55:00.957910491Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1218 23:55:00.969722  881462 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1218 23:55:00.969854  881462 cni.go:84] Creating CNI manager for ""
	I1218 23:55:00.969885  881462 cni.go:136] 2 nodes found, recommending kindnet
	I1218 23:55:00.969896  881462 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 23:55:00.969928  881462 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-320272 NodeName:multinode-320272-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 23:55:00.970079  881462 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-320272-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 23:55:00.970136  881462 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-320272-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-320272 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1218 23:55:00.970221  881462 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1218 23:55:00.980607  881462 command_runner.go:130] > kubeadm
	I1218 23:55:00.980692  881462 command_runner.go:130] > kubectl
	I1218 23:55:00.980713  881462 command_runner.go:130] > kubelet
	I1218 23:55:00.981693  881462 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 23:55:00.981779  881462 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1218 23:55:00.992968  881462 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1218 23:55:01.020537  881462 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 23:55:01.043404  881462 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1218 23:55:01.048043  881462 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 23:55:01.062004  881462 host.go:66] Checking if "multinode-320272" exists ...
	I1218 23:55:01.062293  881462 config.go:182] Loaded profile config "multinode-320272": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1218 23:55:01.062559  881462 start.go:304] JoinCluster: &{Name:multinode-320272 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-320272 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:55:01.062645  881462 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1218 23:55:01.062701  881462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272
	I1218 23:55:01.085620  881462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33516 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272/id_rsa Username:docker}
	I1218 23:55:01.269414  881462 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token x9h4cu.2xbg0apq7mptd56y --discovery-token-ca-cert-hash sha256:c459ee434ab47b92cbaa79d344c36da497111b64dc66fca5cb8785b7cbb4349c 
	I1218 23:55:01.269459  881462 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1218 23:55:01.269491  881462 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x9h4cu.2xbg0apq7mptd56y --discovery-token-ca-cert-hash sha256:c459ee434ab47b92cbaa79d344c36da497111b64dc66fca5cb8785b7cbb4349c --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-320272-m02"
	I1218 23:55:01.310677  881462 command_runner.go:130] ! W1218 23:55:01.310302    1024 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1218 23:55:01.360765  881462 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1218 23:55:01.446185  881462 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 23:55:04.136938  881462 command_runner.go:130] > [preflight] Running pre-flight checks
	I1218 23:55:04.136968  881462 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1218 23:55:04.136977  881462 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1051-aws
	I1218 23:55:04.136983  881462 command_runner.go:130] > OS: Linux
	I1218 23:55:04.136989  881462 command_runner.go:130] > CGROUPS_CPU: enabled
	I1218 23:55:04.136997  881462 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1218 23:55:04.137003  881462 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1218 23:55:04.137010  881462 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1218 23:55:04.137020  881462 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1218 23:55:04.137026  881462 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1218 23:55:04.137045  881462 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1218 23:55:04.137056  881462 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1218 23:55:04.137063  881462 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1218 23:55:04.137074  881462 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1218 23:55:04.137084  881462 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1218 23:55:04.137095  881462 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 23:55:04.137104  881462 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 23:55:04.137111  881462 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1218 23:55:04.137125  881462 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1218 23:55:04.137131  881462 command_runner.go:130] > This node has joined the cluster:
	I1218 23:55:04.137143  881462 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1218 23:55:04.137151  881462 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1218 23:55:04.137162  881462 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1218 23:55:04.137175  881462 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x9h4cu.2xbg0apq7mptd56y --discovery-token-ca-cert-hash sha256:c459ee434ab47b92cbaa79d344c36da497111b64dc66fca5cb8785b7cbb4349c --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-320272-m02": (2.867668494s)
	I1218 23:55:04.137191  881462 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1218 23:55:04.358476  881462 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1218 23:55:04.358578  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2 minikube.k8s.io/name=multinode-320272 minikube.k8s.io/updated_at=2023_12_18T23_55_04_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 23:55:04.468106  881462 command_runner.go:130] > node/multinode-320272-m02 labeled
	I1218 23:55:04.471944  881462 start.go:306] JoinCluster complete in 3.40938063s
	I1218 23:55:04.471986  881462 cni.go:84] Creating CNI manager for ""
	I1218 23:55:04.471992  881462 cni.go:136] 2 nodes found, recommending kindnet
	I1218 23:55:04.472044  881462 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1218 23:55:04.476765  881462 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1218 23:55:04.476790  881462 command_runner.go:130] >   Size: 4030506   	Blocks: 7880       IO Block: 4096   regular file
	I1218 23:55:04.476799  881462 command_runner.go:130] > Device: 36h/54d	Inode: 3640141     Links: 1
	I1218 23:55:04.476809  881462 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 23:55:04.476820  881462 command_runner.go:130] > Access: 2023-12-04 16:39:54.000000000 +0000
	I1218 23:55:04.476829  881462 command_runner.go:130] > Modify: 2023-12-04 16:39:54.000000000 +0000
	I1218 23:55:04.476836  881462 command_runner.go:130] > Change: 2023-12-18 23:32:04.107136004 +0000
	I1218 23:55:04.476844  881462 command_runner.go:130] >  Birth: 2023-12-18 23:32:04.063136379 +0000
	I1218 23:55:04.477368  881462 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1218 23:55:04.477387  881462 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1218 23:55:04.508147  881462 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1218 23:55:04.882894  881462 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1218 23:55:04.882920  881462 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1218 23:55:04.882927  881462 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1218 23:55:04.882933  881462 command_runner.go:130] > daemonset.apps/kindnet configured
	I1218 23:55:04.883316  881462 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:55:04.883581  881462 kapi.go:59] client config for multinode-320272: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.crt", KeyFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.key", CAFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf310), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 23:55:04.883906  881462 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1218 23:55:04.883922  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:04.883931  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:04.883939  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:04.888777  881462 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 23:55:04.888804  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:04.888814  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:04 GMT
	I1218 23:55:04.888826  881462 round_trippers.go:580]     Audit-Id: a5b2bfae-b5e9-4701-9b59-66923448284d
	I1218 23:55:04.888832  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:04.888839  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:04.888849  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:04.888856  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:04.888866  881462 round_trippers.go:580]     Content-Length: 291
	I1218 23:55:04.889299  881462 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"506a1bf3-be48-435b-8d09-a6642bb1a363","resourceVersion":"460","creationTimestamp":"2023-12-18T23:54:02Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1218 23:55:04.889408  881462 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-320272" context rescaled to 1 replicas
	I1218 23:55:04.889439  881462 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1218 23:55:04.892931  881462 out.go:177] * Verifying Kubernetes components...
	I1218 23:55:04.895038  881462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:55:04.927209  881462 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:55:04.927465  881462 kapi.go:59] client config for multinode-320272: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.crt", KeyFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/profiles/multinode-320272/client.key", CAFile:"/home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf310), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 23:55:04.927724  881462 node_ready.go:35] waiting up to 6m0s for node "multinode-320272-m02" to be "Ready" ...
	I1218 23:55:04.927797  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272-m02
	I1218 23:55:04.927807  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:04.927815  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:04.927822  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:04.930638  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:04.930662  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:04.930670  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:04.930677  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:04.930690  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:04.930697  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:04 GMT
	I1218 23:55:04.930703  881462 round_trippers.go:580]     Audit-Id: 63b9501c-a651-44c0-8e14-610b52666130
	I1218 23:55:04.930712  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:04.931178  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272-m02","uid":"fe001c8e-baec-4fe3-9a83-d2ed336abca4","resourceVersion":"498","creationTimestamp":"2023-12-18T23:55:04Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T23_55_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:55:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 5735 chars]
	I1218 23:55:05.428381  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272-m02
	I1218 23:55:05.428411  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:05.428422  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:05.428429  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:05.430875  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:05.430902  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:05.430911  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:05.430922  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:05 GMT
	I1218 23:55:05.430929  881462 round_trippers.go:580]     Audit-Id: fd02e951-15f8-48cb-aecc-039e2f85c835
	I1218 23:55:05.430935  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:05.430944  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:05.430950  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:05.431344  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272-m02","uid":"fe001c8e-baec-4fe3-9a83-d2ed336abca4","resourceVersion":"509","creationTimestamp":"2023-12-18T23:55:04Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T23_55_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:55:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5844 chars]
	I1218 23:55:05.928086  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272-m02
	I1218 23:55:05.928109  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:05.928119  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:05.928125  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:05.930518  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:05.930541  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:05.930549  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:05.930556  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:05.930562  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:05.930569  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:05 GMT
	I1218 23:55:05.930575  881462 round_trippers.go:580]     Audit-Id: 69c70455-2bab-4baa-884a-e2904d844c26
	I1218 23:55:05.930585  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:05.930938  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272-m02","uid":"fe001c8e-baec-4fe3-9a83-d2ed336abca4","resourceVersion":"516","creationTimestamp":"2023-12-18T23:55:04Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T23_55_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:55:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5930 chars]
	I1218 23:55:05.931332  881462 node_ready.go:49] node "multinode-320272-m02" has status "Ready":"True"
	I1218 23:55:05.931357  881462 node_ready.go:38] duration metric: took 1.003611634s waiting for node "multinode-320272-m02" to be "Ready" ...
	I1218 23:55:05.931371  881462 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 23:55:05.931433  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1218 23:55:05.931443  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:05.931451  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:05.931461  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:05.934819  881462 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 23:55:05.934840  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:05.934848  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:05 GMT
	I1218 23:55:05.934855  881462 round_trippers.go:580]     Audit-Id: 4b62e67a-9eef-4226-abb6-0cd1864c2af4
	I1218 23:55:05.934862  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:05.934869  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:05.934875  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:05.934881  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:05.935797  881462 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"516"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fwqn2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9a076607-92d0-42d5-a2e5-95580b423c69","resourceVersion":"456","creationTimestamp":"2023-12-18T23:54:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"21efa5fd-ab76-46b8-be2e-c7171335b5f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"21efa5fd-ab76-46b8-be2e-c7171335b5f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1218 23:55:05.938705  881462 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fwqn2" in "kube-system" namespace to be "Ready" ...
	I1218 23:55:05.938803  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fwqn2
	I1218 23:55:05.938815  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:05.938825  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:05.938832  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:05.941264  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:05.941287  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:05.941301  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:05.941308  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:05 GMT
	I1218 23:55:05.941314  881462 round_trippers.go:580]     Audit-Id: 03812232-af9a-4cdb-b69e-81b9237a626c
	I1218 23:55:05.941321  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:05.941327  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:05.941337  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:05.941473  881462 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fwqn2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9a076607-92d0-42d5-a2e5-95580b423c69","resourceVersion":"456","creationTimestamp":"2023-12-18T23:54:15Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"21efa5fd-ab76-46b8-be2e-c7171335b5f1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"21efa5fd-ab76-46b8-be2e-c7171335b5f1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1218 23:55:05.941980  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:55:05.941994  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:05.942003  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:05.942010  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:05.944271  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:05.944296  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:05.944304  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:05.944310  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:05 GMT
	I1218 23:55:05.944317  881462 round_trippers.go:580]     Audit-Id: 8a587ad8-01e1-46d5-99f6-caf5f21690b1
	I1218 23:55:05.944323  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:05.944329  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:05.944338  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:05.944475  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1218 23:55:05.944844  881462 pod_ready.go:92] pod "coredns-5dd5756b68-fwqn2" in "kube-system" namespace has status "Ready":"True"
	I1218 23:55:05.944862  881462 pod_ready.go:81] duration metric: took 6.127301ms waiting for pod "coredns-5dd5756b68-fwqn2" in "kube-system" namespace to be "Ready" ...
	I1218 23:55:05.944872  881462 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:55:05.944934  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-320272
	I1218 23:55:05.944944  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:05.944951  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:05.944958  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:05.947117  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:05.947138  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:05.947146  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:05 GMT
	I1218 23:55:05.947152  881462 round_trippers.go:580]     Audit-Id: 1d4fc2a3-ab3e-4d41-9ab0-2bb453ff1685
	I1218 23:55:05.947158  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:05.947164  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:05.947170  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:05.947177  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:05.947271  881462 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-320272","namespace":"kube-system","uid":"e8faa391-587b-49be-b29e-11f12f8c02bc","resourceVersion":"425","creationTimestamp":"2023-12-18T23:54:02Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"f1b69f0c003b97673c6630ae2cd61703","kubernetes.io/config.mirror":"f1b69f0c003b97673c6630ae2cd61703","kubernetes.io/config.seen":"2023-12-18T23:53:55.188275540Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1218 23:55:05.947767  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:55:05.947785  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:05.947793  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:05.947801  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:05.950257  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:05.950280  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:05.950287  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:05 GMT
	I1218 23:55:05.950301  881462 round_trippers.go:580]     Audit-Id: f630df94-ecd6-46aa-962c-1f3a021e4aa7
	I1218 23:55:05.950308  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:05.950318  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:05.950330  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:05.950337  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:05.950443  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1218 23:55:05.950839  881462 pod_ready.go:92] pod "etcd-multinode-320272" in "kube-system" namespace has status "Ready":"True"
	I1218 23:55:05.950855  881462 pod_ready.go:81] duration metric: took 5.973694ms waiting for pod "etcd-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:55:05.950872  881462 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:55:05.950926  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-320272
	I1218 23:55:05.950937  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:05.950944  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:05.950951  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:05.953243  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:05.953298  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:05.953320  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:05.953340  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:05 GMT
	I1218 23:55:05.953371  881462 round_trippers.go:580]     Audit-Id: 6bd2efc4-177a-46b1-bd7d-67d5445390c7
	I1218 23:55:05.953398  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:05.953410  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:05.953416  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:05.953522  881462 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-320272","namespace":"kube-system","uid":"b160ebe3-121c-420a-a9c1-0315f470178c","resourceVersion":"426","creationTimestamp":"2023-12-18T23:54:03Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"509950bfc349e4a24d79b32d45049002","kubernetes.io/config.mirror":"509950bfc349e4a24d79b32d45049002","kubernetes.io/config.seen":"2023-12-18T23:54:02.715849151Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1218 23:55:05.954032  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:55:05.954049  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:05.954056  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:05.954063  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:05.956276  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:05.956298  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:05.956306  881462 round_trippers.go:580]     Audit-Id: db175daa-bcf0-496b-8c52-b23e169e14a1
	I1218 23:55:05.956313  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:05.956320  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:05.956326  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:05.956336  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:05.956342  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:05 GMT
	I1218 23:55:05.956574  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1218 23:55:05.956956  881462 pod_ready.go:92] pod "kube-apiserver-multinode-320272" in "kube-system" namespace has status "Ready":"True"
	I1218 23:55:05.956974  881462 pod_ready.go:81] duration metric: took 6.095113ms waiting for pod "kube-apiserver-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:55:05.956987  881462 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:55:05.957049  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-320272
	I1218 23:55:05.957061  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:05.957068  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:05.957075  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:05.959342  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:05.959365  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:05.959373  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:05.959380  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:05 GMT
	I1218 23:55:05.959389  881462 round_trippers.go:580]     Audit-Id: 592012d4-371c-42ab-812b-a012f1aef895
	I1218 23:55:05.959395  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:05.959402  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:05.959408  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:05.959710  881462 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-320272","namespace":"kube-system","uid":"9b5d1951-61b5-4237-9e19-dcf9fd90a729","resourceVersion":"427","creationTimestamp":"2023-12-18T23:54:03Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aa4353254f730a64c4fb2fddfe7d9122","kubernetes.io/config.mirror":"aa4353254f730a64c4fb2fddfe7d9122","kubernetes.io/config.seen":"2023-12-18T23:54:02.715854968Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1218 23:55:05.960270  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:55:05.960287  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:05.960296  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:05.960308  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:05.962547  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:05.962602  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:05.962624  881462 round_trippers.go:580]     Audit-Id: 0a7cac0f-0c3f-4e78-9aea-860e988bdff6
	I1218 23:55:05.962644  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:05.962679  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:05.962701  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:05.962720  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:05.962738  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:05 GMT
	I1218 23:55:05.962880  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1218 23:55:05.963301  881462 pod_ready.go:92] pod "kube-controller-manager-multinode-320272" in "kube-system" namespace has status "Ready":"True"
	I1218 23:55:05.963322  881462 pod_ready.go:81] duration metric: took 6.323854ms waiting for pod "kube-controller-manager-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:55:05.963344  881462 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-54h89" in "kube-system" namespace to be "Ready" ...
	I1218 23:55:06.128821  881462 request.go:629] Waited for 165.391718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-54h89
	I1218 23:55:06.128910  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-54h89
	I1218 23:55:06.128925  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:06.128934  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:06.128942  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:06.131817  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:06.131882  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:06.131905  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:06 GMT
	I1218 23:55:06.131927  881462 round_trippers.go:580]     Audit-Id: 2c1369d2-1346-4160-9627-5a3583d9a0f9
	I1218 23:55:06.131975  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:06.132000  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:06.132015  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:06.132022  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:06.132180  881462 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-54h89","generateName":"kube-proxy-","namespace":"kube-system","uid":"49bbd70d-f3b8-438d-8a7a-ad0a46e872b0","resourceVersion":"418","creationTimestamp":"2023-12-18T23:54:15Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"db4d7c1a-b61e-4603-bf4e-b58f24456242","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db4d7c1a-b61e-4603-bf4e-b58f24456242\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1218 23:55:06.329082  881462 request.go:629] Waited for 196.380847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:55:06.329201  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:55:06.329214  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:06.329224  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:06.329231  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:06.331557  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:06.331585  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:06.331593  881462 round_trippers.go:580]     Audit-Id: 89051591-f548-40be-ae93-157c6f7a1e16
	I1218 23:55:06.331599  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:06.331605  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:06.331612  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:06.331618  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:06.331629  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:06 GMT
	I1218 23:55:06.331965  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1218 23:55:06.332362  881462 pod_ready.go:92] pod "kube-proxy-54h89" in "kube-system" namespace has status "Ready":"True"
	I1218 23:55:06.332381  881462 pod_ready.go:81] duration metric: took 369.021992ms waiting for pod "kube-proxy-54h89" in "kube-system" namespace to be "Ready" ...
	I1218 23:55:06.332391  881462 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bq8nw" in "kube-system" namespace to be "Ready" ...
	I1218 23:55:06.528772  881462 request.go:629] Waited for 196.311621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bq8nw
	I1218 23:55:06.528897  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bq8nw
	I1218 23:55:06.528908  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:06.528918  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:06.528925  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:06.531730  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:06.531754  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:06.531763  881462 round_trippers.go:580]     Audit-Id: 60ac4bba-35f6-41e0-b943-a2e1805df607
	I1218 23:55:06.531770  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:06.531776  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:06.531783  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:06.531790  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:06.531797  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:06 GMT
	I1218 23:55:06.531999  881462 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bq8nw","generateName":"kube-proxy-","namespace":"kube-system","uid":"732986f1-1f4a-4090-914b-0906615ff086","resourceVersion":"512","creationTimestamp":"2023-12-18T23:55:04Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"db4d7c1a-b61e-4603-bf4e-b58f24456242","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:55:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db4d7c1a-b61e-4603-bf4e-b58f24456242\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1218 23:55:06.728809  881462 request.go:629] Waited for 196.316659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-320272-m02
	I1218 23:55:06.728931  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272-m02
	I1218 23:55:06.728944  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:06.728953  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:06.728964  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:06.731490  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:06.731514  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:06.731522  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:06.731530  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:06 GMT
	I1218 23:55:06.731536  881462 round_trippers.go:580]     Audit-Id: 10dad91c-a746-4995-b141-227987c76511
	I1218 23:55:06.731542  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:06.731556  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:06.731566  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:06.731669  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272-m02","uid":"fe001c8e-baec-4fe3-9a83-d2ed336abca4","resourceVersion":"516","creationTimestamp":"2023-12-18T23:55:04Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T23_55_04_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:55:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 5930 chars]
	I1218 23:55:06.732093  881462 pod_ready.go:92] pod "kube-proxy-bq8nw" in "kube-system" namespace has status "Ready":"True"
	I1218 23:55:06.732110  881462 pod_ready.go:81] duration metric: took 399.709814ms waiting for pod "kube-proxy-bq8nw" in "kube-system" namespace to be "Ready" ...
	I1218 23:55:06.732121  881462 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:55:06.928406  881462 request.go:629] Waited for 196.219995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-320272
	I1218 23:55:06.928472  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-320272
	I1218 23:55:06.928483  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:06.928492  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:06.928499  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:06.930997  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:06.931024  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:06.931033  881462 round_trippers.go:580]     Audit-Id: cfd390b9-3830-4271-a68b-f7ee7571bfba
	I1218 23:55:06.931039  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:06.931045  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:06.931052  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:06.931058  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:06.931065  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:06 GMT
	I1218 23:55:06.931179  881462 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-320272","namespace":"kube-system","uid":"135e8712-3a4a-48ca-aede-e77448583234","resourceVersion":"424","creationTimestamp":"2023-12-18T23:54:03Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"194b5fdf4303a130d34b89fd0d0d02aa","kubernetes.io/config.mirror":"194b5fdf4303a130d34b89fd0d0d02aa","kubernetes.io/config.seen":"2023-12-18T23:54:02.715856092Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T23:54:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1218 23:55:07.128973  881462 request.go:629] Waited for 197.327163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:55:07.129046  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-320272
	I1218 23:55:07.129056  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:07.129065  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:07.129073  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:07.131750  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:07.131777  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:07.131786  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:07 GMT
	I1218 23:55:07.131792  881462 round_trippers.go:580]     Audit-Id: 1386e2e1-5ab1-4915-9a86-4ea7e7b27e9d
	I1218 23:55:07.131798  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:07.131805  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:07.131811  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:07.131817  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:07.132255  881462 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-12-18T23:53:59Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1218 23:55:07.132681  881462 pod_ready.go:92] pod "kube-scheduler-multinode-320272" in "kube-system" namespace has status "Ready":"True"
	I1218 23:55:07.132700  881462 pod_ready.go:81] duration metric: took 400.572586ms waiting for pod "kube-scheduler-multinode-320272" in "kube-system" namespace to be "Ready" ...
	I1218 23:55:07.132713  881462 pod_ready.go:38] duration metric: took 1.201331053s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 23:55:07.132733  881462 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 23:55:07.132791  881462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:55:07.146759  881462 system_svc.go:56] duration metric: took 14.015141ms WaitForService to wait for kubelet.
	I1218 23:55:07.146789  881462 kubeadm.go:581] duration metric: took 2.257315907s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 23:55:07.146819  881462 node_conditions.go:102] verifying NodePressure condition ...
	I1218 23:55:07.328147  881462 request.go:629] Waited for 181.254304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1218 23:55:07.328200  881462 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1218 23:55:07.328206  881462 round_trippers.go:469] Request Headers:
	I1218 23:55:07.328216  881462 round_trippers.go:473]     Accept: application/json, */*
	I1218 23:55:07.328227  881462 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1218 23:55:07.330963  881462 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 23:55:07.330987  881462 round_trippers.go:577] Response Headers:
	I1218 23:55:07.330995  881462 round_trippers.go:580]     Content-Type: application/json
	I1218 23:55:07.331002  881462 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: b9e28263-b01a-48e3-a0fb-56029faca31a
	I1218 23:55:07.331008  881462 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: aca22c28-1cd2-4ba2-93f2-2bfecbbff29d
	I1218 23:55:07.331015  881462 round_trippers.go:580]     Date: Mon, 18 Dec 2023 23:55:07 GMT
	I1218 23:55:07.331021  881462 round_trippers.go:580]     Audit-Id: 4127538a-64bd-480d-9a32-89cc5bc1bd8f
	I1218 23:55:07.331027  881462 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 23:55:07.331189  881462 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"517"},"items":[{"metadata":{"name":"multinode-320272","uid":"3e8050ce-fccf-4a84-85d8-100f694fd390","resourceVersion":"437","creationTimestamp":"2023-12-18T23:53:59Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-320272","kubernetes.io/os":"linux","minikube.k8s.io/commit":"0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2","minikube.k8s.io/name":"multinode-320272","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T23_54_03_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 13004 chars]
	I1218 23:55:07.331842  881462 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1218 23:55:07.331866  881462 node_conditions.go:123] node cpu capacity is 2
	I1218 23:55:07.331876  881462 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1218 23:55:07.331881  881462 node_conditions.go:123] node cpu capacity is 2
	I1218 23:55:07.331886  881462 node_conditions.go:105] duration metric: took 185.060457ms to run NodePressure ...
	I1218 23:55:07.331902  881462 start.go:228] waiting for startup goroutines ...
	I1218 23:55:07.331932  881462 start.go:242] writing updated cluster config ...
	I1218 23:55:07.332259  881462 ssh_runner.go:195] Run: rm -f paused
	I1218 23:55:07.401667  881462 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1218 23:55:07.404679  881462 out.go:177] * Done! kubectl is now configured to use "multinode-320272" cluster and "default" namespace by default
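	
	The readiness polling recorded above (pod_ready.go repeatedly GETs each control-plane pod and checks its Ready condition before moving on) can be reproduced outside the test harness. The sketch below is illustrative only and is not minikube's implementation; it assumes client-go is available, that the kubeconfig written by "minikube start" sits at the default ~/.kube/config location, and it reuses the pod name from the log purely as an example.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True,
    // mirroring the check summarized by pod_ready.go in the log above.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Assumption: kubeconfig at the default ~/.kube/config location.
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	// Same 6m budget the log shows for each control-plane pod.
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := clientset.CoreV1().Pods("kube-system").Get(
    			context.TODO(), "kube-apiserver-multinode-320272", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }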
	
	* 
	* ==> CRI-O <==
	* Dec 18 23:54:48 multinode-320272 crio[891]: time="2023-12-18 23:54:48.243628958Z" level=info msg="Starting container: 41a8227b66047c97e123eddc6604a274b94a952108c6b6967fafbb156e784a91" id=980fe175-4299-4f8f-add9-d6d8af2336e9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 18 23:54:48 multinode-320272 crio[891]: time="2023-12-18 23:54:48.257373406Z" level=info msg="Started container" PID=1903 containerID=41a8227b66047c97e123eddc6604a274b94a952108c6b6967fafbb156e784a91 description=kube-system/storage-provisioner/storage-provisioner id=980fe175-4299-4f8f-add9-d6d8af2336e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=06ffdfdfc2a8902a8ee5a705b4b686b9c90e2e2eb5ab7a02182391a1a966df5d
	Dec 18 23:54:48 multinode-320272 crio[891]: time="2023-12-18 23:54:48.267732556Z" level=info msg="Created container 1393d0f8efb19ea2fe77c40d7ddd88da8f714220c6405fbd46618d4d89d282c2: kube-system/coredns-5dd5756b68-fwqn2/coredns" id=9ab4098e-3cea-4934-8283-d1c5781c7853 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 18 23:54:48 multinode-320272 crio[891]: time="2023-12-18 23:54:48.269442050Z" level=info msg="Starting container: 1393d0f8efb19ea2fe77c40d7ddd88da8f714220c6405fbd46618d4d89d282c2" id=ac2f4724-3442-41d2-a3fd-1f0066d62c67 name=/runtime.v1.RuntimeService/StartContainer
	Dec 18 23:54:48 multinode-320272 crio[891]: time="2023-12-18 23:54:48.288537625Z" level=info msg="Started container" PID=1921 containerID=1393d0f8efb19ea2fe77c40d7ddd88da8f714220c6405fbd46618d4d89d282c2 description=kube-system/coredns-5dd5756b68-fwqn2/coredns id=ac2f4724-3442-41d2-a3fd-1f0066d62c67 name=/runtime.v1.RuntimeService/StartContainer sandboxID=699a9e42b14d318dbf181b70efcb0e20ce6af0b5517842d1213f8146df64956d
	Dec 18 23:55:08 multinode-320272 crio[891]: time="2023-12-18 23:55:08.651054139Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-9rw5h/POD" id=388ef095-d35e-47fb-850a-71320a894def name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 18 23:55:08 multinode-320272 crio[891]: time="2023-12-18 23:55:08.651108645Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 18 23:55:08 multinode-320272 crio[891]: time="2023-12-18 23:55:08.672529355Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-9rw5h Namespace:default ID:f75f3bf0a4eed4eecbc6b7e1e4478786e49e1f2782f8e8463f49eca74cb63fac UID:6fc4e0c1-b531-43a9-a4b6-f0e06f930ed2 NetNS:/var/run/netns/3b27f3af-bc5d-48fd-acb6-222b75f78e72 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 18 23:55:08 multinode-320272 crio[891]: time="2023-12-18 23:55:08.672720624Z" level=info msg="Adding pod default_busybox-5bc68d56bd-9rw5h to CNI network \"kindnet\" (type=ptp)"
	Dec 18 23:55:08 multinode-320272 crio[891]: time="2023-12-18 23:55:08.682648818Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-9rw5h Namespace:default ID:f75f3bf0a4eed4eecbc6b7e1e4478786e49e1f2782f8e8463f49eca74cb63fac UID:6fc4e0c1-b531-43a9-a4b6-f0e06f930ed2 NetNS:/var/run/netns/3b27f3af-bc5d-48fd-acb6-222b75f78e72 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 18 23:55:08 multinode-320272 crio[891]: time="2023-12-18 23:55:08.682807283Z" level=info msg="Checking pod default_busybox-5bc68d56bd-9rw5h for CNI network kindnet (type=ptp)"
	Dec 18 23:55:08 multinode-320272 crio[891]: time="2023-12-18 23:55:08.685412492Z" level=info msg="Ran pod sandbox f75f3bf0a4eed4eecbc6b7e1e4478786e49e1f2782f8e8463f49eca74cb63fac with infra container: default/busybox-5bc68d56bd-9rw5h/POD" id=388ef095-d35e-47fb-850a-71320a894def name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 18 23:55:08 multinode-320272 crio[891]: time="2023-12-18 23:55:08.686952969Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=952b9d3b-c053-49c3-a213-27f99fda8dbb name=/runtime.v1.ImageService/ImageStatus
	Dec 18 23:55:08 multinode-320272 crio[891]: time="2023-12-18 23:55:08.687168598Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=952b9d3b-c053-49c3-a213-27f99fda8dbb name=/runtime.v1.ImageService/ImageStatus
	Dec 18 23:55:08 multinode-320272 crio[891]: time="2023-12-18 23:55:08.688916820Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=7392826d-c5ce-4dcb-835e-c57e9806d932 name=/runtime.v1.ImageService/PullImage
	Dec 18 23:55:08 multinode-320272 crio[891]: time="2023-12-18 23:55:08.691037946Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 18 23:55:09 multinode-320272 crio[891]: time="2023-12-18 23:55:09.321028497Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Dec 18 23:55:10 multinode-320272 crio[891]: time="2023-12-18 23:55:10.416650594Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=7392826d-c5ce-4dcb-835e-c57e9806d932 name=/runtime.v1.ImageService/PullImage
	Dec 18 23:55:10 multinode-320272 crio[891]: time="2023-12-18 23:55:10.418096500Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=cae912da-5850-4bb8-a7a5-7bfba506f1ab name=/runtime.v1.ImageService/ImageStatus
	Dec 18 23:55:10 multinode-320272 crio[891]: time="2023-12-18 23:55:10.418772179Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=cae912da-5850-4bb8-a7a5-7bfba506f1ab name=/runtime.v1.ImageService/ImageStatus
	Dec 18 23:55:10 multinode-320272 crio[891]: time="2023-12-18 23:55:10.420969046Z" level=info msg="Creating container: default/busybox-5bc68d56bd-9rw5h/busybox" id=7bb0ae3f-dec0-4c46-9e80-f806728bb010 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 18 23:55:10 multinode-320272 crio[891]: time="2023-12-18 23:55:10.421180827Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 18 23:55:10 multinode-320272 crio[891]: time="2023-12-18 23:55:10.490420909Z" level=info msg="Created container e102e89670a1ccbfbc40cda6b8772dce6ed2d084624a339153f195ef70c45485: default/busybox-5bc68d56bd-9rw5h/busybox" id=7bb0ae3f-dec0-4c46-9e80-f806728bb010 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 18 23:55:10 multinode-320272 crio[891]: time="2023-12-18 23:55:10.491179345Z" level=info msg="Starting container: e102e89670a1ccbfbc40cda6b8772dce6ed2d084624a339153f195ef70c45485" id=bbb17c08-5ceb-4334-af10-7c90f1aec452 name=/runtime.v1.RuntimeService/StartContainer
	Dec 18 23:55:10 multinode-320272 crio[891]: time="2023-12-18 23:55:10.499003176Z" level=info msg="Started container" PID=2062 containerID=e102e89670a1ccbfbc40cda6b8772dce6ed2d084624a339153f195ef70c45485 description=default/busybox-5bc68d56bd-9rw5h/busybox id=bbb17c08-5ceb-4334-af10-7c90f1aec452 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f75f3bf0a4eed4eecbc6b7e1e4478786e49e1f2782f8e8463f49eca74cb63fac
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e102e89670a1c       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   4 seconds ago        Running             busybox                   0                   f75f3bf0a4eed       busybox-5bc68d56bd-9rw5h
	1393d0f8efb19       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      27 seconds ago       Running             coredns                   0                   699a9e42b14d3       coredns-5dd5756b68-fwqn2
	41a8227b66047       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      27 seconds ago       Running             storage-provisioner       0                   06ffdfdfc2a89       storage-provisioner
	a5e4bbfdbaf22       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                      57 seconds ago       Running             kube-proxy                0                   7750ce2736cd3       kube-proxy-54h89
	b232c9ba71cb5       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      58 seconds ago       Running             kindnet-cni               0                   e5e141adb07f5       kindnet-6vp9q
	77649a1a92775       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   9dedbb82060e9       etcd-multinode-320272
	e1e1448a27ba5       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                      About a minute ago   Running             kube-controller-manager   0                   e76eb0aacf78f       kube-controller-manager-multinode-320272
	23b1a0fd5ac60       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                      About a minute ago   Running             kube-apiserver            0                   4b86da37556c5       kube-apiserver-multinode-320272
	685e5d7a2a85a       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                      About a minute ago   Running             kube-scheduler            0                   23bb936a2cd67       kube-scheduler-multinode-320272
	
	* 
	* ==> coredns [1393d0f8efb19ea2fe77c40d7ddd88da8f714220c6405fbd46618d4d89d282c2] <==
	* [INFO] 10.244.0.3:50632 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106798s
	[INFO] 10.244.1.2:55913 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107897s
	[INFO] 10.244.1.2:43211 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013300766s
	[INFO] 10.244.1.2:44802 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007314s
	[INFO] 10.244.1.2:47315 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063884s
	[INFO] 10.244.1.2:46525 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005534715s
	[INFO] 10.244.1.2:36438 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077956s
	[INFO] 10.244.1.2:59787 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073559s
	[INFO] 10.244.1.2:52969 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082445s
	[INFO] 10.244.0.3:46883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106584s
	[INFO] 10.244.0.3:52427 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063655s
	[INFO] 10.244.0.3:39969 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057911s
	[INFO] 10.244.0.3:35483 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062087s
	[INFO] 10.244.1.2:56670 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134186s
	[INFO] 10.244.1.2:44009 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091232s
	[INFO] 10.244.1.2:38498 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083167s
	[INFO] 10.244.1.2:39633 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078466s
	[INFO] 10.244.0.3:46366 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111967s
	[INFO] 10.244.0.3:37091 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000122658s
	[INFO] 10.244.0.3:37600 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115331s
	[INFO] 10.244.0.3:44484 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000098141s
	[INFO] 10.244.1.2:50129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122247s
	[INFO] 10.244.1.2:34538 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000059356s
	[INFO] 10.244.1.2:38193 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060562s
	[INFO] 10.244.1.2:33468 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066461s
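	
	The CoreDNS entries above are A/AAAA/PTR lookups issued by pods for in-cluster names. As an illustration only (not part of the test run), the same lookups can be made with the Go standard library; this assumes the program runs inside the cluster, or on a host whose resolver points at the cluster DNS, so that names like kubernetes.default.svc.cluster.local resolve at all.

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	// Names taken from the CoreDNS log above.
    	for _, name := range []string{
    		"kubernetes.default.svc.cluster.local",
    		"host.minikube.internal",
    	} {
    		addrs, err := net.DefaultResolver.LookupHost(ctx, name)
    		fmt.Printf("%s -> %v (err=%v)\n", name, addrs, err)
    	}
    }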
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-320272
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-320272
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2
	                    minikube.k8s.io/name=multinode-320272
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_18T23_54_03_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 23:53:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-320272
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 23:55:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 23:54:47 +0000   Mon, 18 Dec 2023 23:53:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 23:54:47 +0000   Mon, 18 Dec 2023 23:53:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 23:54:47 +0000   Mon, 18 Dec 2023 23:53:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 23:54:47 +0000   Mon, 18 Dec 2023 23:54:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-320272
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a083fa62cdf4b74bb1c3afdac4d6ecf
	  System UUID:                d423611b-3000-4aa9-ac75-619d7116cdfe
	  Boot ID:                    a58889d6-3937-44de-bde4-55a8fc7b5b88
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-9rw5h                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-fwqn2                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     60s
	  kube-system                 etcd-multinode-320272                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         73s
	  kube-system                 kindnet-6vp9q                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      60s
	  kube-system                 kube-apiserver-multinode-320272             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-multinode-320272    200m (10%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-54h89                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-multinode-320272             100m (5%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 58s                kube-proxy       
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  80s (x8 over 80s)  kubelet          Node multinode-320272 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x8 over 80s)  kubelet          Node multinode-320272 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x8 over 80s)  kubelet          Node multinode-320272 status is now: NodeHasSufficientPID
	  Normal  Starting                 73s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  73s                kubelet          Node multinode-320272 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s                kubelet          Node multinode-320272 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s                kubelet          Node multinode-320272 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           60s                node-controller  Node multinode-320272 event: Registered Node multinode-320272 in Controller
	  Normal  NodeReady                28s                kubelet          Node multinode-320272 status is now: NodeReady
	
	
	Name:               multinode-320272-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-320272-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2
	                    minikube.k8s.io/name=multinode-320272
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_18T23_55_04_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 23:55:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-320272-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 23:55:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 23:55:05 +0000   Mon, 18 Dec 2023 23:55:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 23:55:05 +0000   Mon, 18 Dec 2023 23:55:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 23:55:05 +0000   Mon, 18 Dec 2023 23:55:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 23:55:05 +0000   Mon, 18 Dec 2023 23:55:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-320272-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 2726530ba6f04ae394ded9522d1deadc
	  System UUID:                39b1805b-e3d6-4b90-9d34-2c2af512880c
	  Boot ID:                    a58889d6-3937-44de-bde4-55a8fc7b5b88
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-tdcv5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-jsz7t               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11s
	  kube-system                 kube-proxy-bq8nw            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10s                kube-proxy       
	  Normal  NodeHasSufficientMemory  11s (x5 over 13s)  kubelet          Node multinode-320272-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x5 over 13s)  kubelet          Node multinode-320272-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x5 over 13s)  kubelet          Node multinode-320272-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10s                node-controller  Node multinode-320272-m02 event: Registered Node multinode-320272-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-320272-m02 status is now: NodeReady
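	
	The two node descriptions above correspond to the capacity and condition values the test already checked in the node_conditions.go lines earlier (ephemeral-storage 203034800Ki, cpu 2, and the pressure conditions). A minimal sketch of the same read path, assuming client-go and kubeconfig access to this cluster and not taken from the test code itself:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumption: kubeconfig at the default ~/.kube/config location.
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Capacity fields shown in the "Capacity:" blocks above.
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    		// Conditions shown in the "Conditions:" tables above.
    		for _, c := range n.Status.Conditions {
    			fmt.Printf("  %s=%s (%s)\n", c.Type, c.Status, c.Reason)
    		}
    	}
    }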
	
	* 
	* ==> dmesg <==
	* [  +0.001100] FS-Cache: O-key=[8] 'ccd3c90000000000'
	[  +0.000777] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001007] FS-Cache: N-cookie d=00000000a818fb90{9p.inode} n=0000000023100663
	[  +0.001130] FS-Cache: N-key=[8] 'ccd3c90000000000'
	[  +0.002924] FS-Cache: Duplicate cookie detected
	[  +0.000781] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.001006] FS-Cache: O-cookie d=00000000a818fb90{9p.inode} n=000000002f76f87c
	[  +0.001129] FS-Cache: O-key=[8] 'ccd3c90000000000'
	[  +0.001007] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001069] FS-Cache: N-cookie d=00000000a818fb90{9p.inode} n=00000000dda31a8a
	[  +0.001229] FS-Cache: N-key=[8] 'ccd3c90000000000'
	[  +2.393317] FS-Cache: Duplicate cookie detected
	[  +0.000757] FS-Cache: O-cookie c=00000029 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001019] FS-Cache: O-cookie d=00000000a818fb90{9p.inode} n=00000000e43fe467
	[  +0.001091] FS-Cache: O-key=[8] 'cbd3c90000000000'
	[  +0.000775] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001013] FS-Cache: N-cookie d=00000000a818fb90{9p.inode} n=0000000023100663
	[  +0.001106] FS-Cache: N-key=[8] 'cbd3c90000000000'
	[  +0.391545] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001015] FS-Cache: O-cookie d=00000000a818fb90{9p.inode} n=00000000a91c1b1c
	[  +0.001117] FS-Cache: O-key=[8] 'd1d3c90000000000'
	[  +0.000731] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000971] FS-Cache: N-cookie d=00000000a818fb90{9p.inode} n=000000001117d976
	[  +0.001087] FS-Cache: N-key=[8] 'd1d3c90000000000'
	
	* 
	* ==> etcd [77649a1a92775bb19131177cdd876df3251745da7805aa649c167985437a2387] <==
	* {"level":"info","ts":"2023-12-18T23:53:56.095188Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T23:53:56.095226Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T23:53:56.095236Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T23:53:56.095304Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-18T23:53:56.095323Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-12-18T23:53:56.095792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-12-18T23:53:56.09619Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-12-18T23:53:56.83199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-18T23:53:56.832118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-18T23:53:56.832169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-12-18T23:53:56.832221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-12-18T23:53:56.832282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-18T23:53:56.832318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-12-18T23:53:56.832358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-12-18T23:53:56.836168Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-320272 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-18T23:53:56.838731Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T23:53:56.838861Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T23:53:56.83997Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-18T23:53:56.840028Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-18T23:53:56.84008Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-18T23:53:56.840103Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T23:53:56.840201Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T23:53:56.840298Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T23:53:56.840354Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T23:53:56.840932Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  23:55:15 up  4:37,  0 users,  load average: 1.77, 1.98, 2.05
	Linux multinode-320272 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [b232c9ba71cb5c292c9ef4153aecbb9be8113245f0d2aa7b1116a2834ab62317] <==
	* I1218 23:54:17.020461       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1218 23:54:17.020526       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I1218 23:54:17.020642       1 main.go:116] setting mtu 1500 for CNI 
	I1218 23:54:17.020655       1 main.go:146] kindnetd IP family: "ipv4"
	I1218 23:54:17.020665       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1218 23:54:47.342248       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1218 23:54:47.355640       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1218 23:54:47.355672       1 main.go:227] handling current node
	I1218 23:54:57.370003       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1218 23:54:57.370112       1 main.go:227] handling current node
	I1218 23:55:07.382012       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1218 23:55:07.382037       1 main.go:227] handling current node
	I1218 23:55:07.382047       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1218 23:55:07.382053       1 main.go:250] Node multinode-320272-m02 has CIDR [10.244.1.0/24] 
	I1218 23:55:07.382205       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [23b1a0fd5ac602c08d2664dd79431850c04ed2bec82b5159b5c8f87489bd4516] <==
	* I1218 23:53:59.477861       1 autoregister_controller.go:141] Starting autoregister controller
	I1218 23:53:59.477867       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1218 23:53:59.477878       1 cache.go:39] Caches are synced for autoregister controller
	I1218 23:53:59.478870       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1218 23:53:59.483537       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1218 23:53:59.484884       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1218 23:53:59.485058       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1218 23:53:59.513980       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1218 23:54:00.198196       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1218 23:54:00.214541       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1218 23:54:00.214605       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1218 23:54:00.865081       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1218 23:54:00.906618       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1218 23:54:01.006651       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1218 23:54:01.013549       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1218 23:54:01.014807       1 controller.go:624] quota admission added evaluator for: endpoints
	I1218 23:54:01.021442       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1218 23:54:01.434373       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1218 23:54:02.651882       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1218 23:54:02.667588       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1218 23:54:02.685822       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1218 23:54:15.360807       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1218 23:54:15.364053       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	E1218 23:55:11.032180       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x40098aa4e0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400bf44b90), ResponseWriter:(*httpsnoop.rw)(0x400bf44b90), Flusher:(*httpsnoop.rw)(0x400bf44b90), CloseNotifier:(*httpsnoop.rw)(0x400bf44b90), Pusher:(*httpsnoop.rw)(0x400bf44b90)}}, encoder:(*versioning.codec)(0x400e13d860), memAllocator:(*runtime.Allocator)(0x400e248f00)})
	E1218 23:55:11.901517       1 upgradeaware.go:439] Error proxying data from backend to client: write tcp 192.168.58.2:8443->192.168.58.1:34962: write: broken pipe
	
	* 
	* ==> kube-controller-manager [e1e1448a27ba5bfb691384cb8f1fbfaf21a43e7b47072c9c81b6e6205e80acdc] <==
	* I1218 23:54:47.818877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.907µs"
	I1218 23:54:47.844874       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.149µs"
	I1218 23:54:48.967790       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.788µs"
	I1218 23:54:49.016777       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.976289ms"
	I1218 23:54:49.016907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.621µs"
	I1218 23:54:50.143828       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1218 23:55:04.057926       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-320272-m02\" does not exist"
	I1218 23:55:04.079358       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-320272-m02" podCIDRs=["10.244.1.0/24"]
	I1218 23:55:04.084834       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jsz7t"
	I1218 23:55:04.089526       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bq8nw"
	I1218 23:55:05.145844       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-320272-m02"
	I1218 23:55:05.146244       1 event.go:307] "Event occurred" object="multinode-320272-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-320272-m02 event: Registered Node multinode-320272-m02 in Controller"
	I1218 23:55:05.720017       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-320272-m02"
	I1218 23:55:08.280261       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1218 23:55:08.306232       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-tdcv5"
	I1218 23:55:08.326287       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-9rw5h"
	I1218 23:55:08.351202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="70.398876ms"
	I1218 23:55:08.374734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.392119ms"
	I1218 23:55:08.374888       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="52.439µs"
	I1218 23:55:08.375251       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="57.411µs"
	I1218 23:55:10.166399       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-tdcv5" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-tdcv5"
	I1218 23:55:10.632549       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.299614ms"
	I1218 23:55:10.632636       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.65µs"
	I1218 23:55:11.023601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="7.096591ms"
	I1218 23:55:11.024595       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="54.842µs"
	
	* 
	* ==> kube-proxy [a5e4bbfdbaf229659a2c598d724dd2568b9d8034da10536c0ab8a5777533b8d3] <==
	* I1218 23:54:17.538348       1 server_others.go:69] "Using iptables proxy"
	I1218 23:54:17.558455       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1218 23:54:17.584670       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1218 23:54:17.586824       1 server_others.go:152] "Using iptables Proxier"
	I1218 23:54:17.586860       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1218 23:54:17.586867       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1218 23:54:17.586928       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1218 23:54:17.587153       1 server.go:846] "Version info" version="v1.28.4"
	I1218 23:54:17.587168       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 23:54:17.588603       1 config.go:188] "Starting service config controller"
	I1218 23:54:17.588620       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1218 23:54:17.588640       1 config.go:97] "Starting endpoint slice config controller"
	I1218 23:54:17.588644       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1218 23:54:17.589000       1 config.go:315] "Starting node config controller"
	I1218 23:54:17.589014       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1218 23:54:17.689124       1 shared_informer.go:318] Caches are synced for node config
	I1218 23:54:17.689125       1 shared_informer.go:318] Caches are synced for service config
	I1218 23:54:17.689166       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [685e5d7a2a85a768e9083737f9a5db5288ee8b0d7c289e757e386ed19fe809a4] <==
	* W1218 23:53:59.462987       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1218 23:53:59.463010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1218 23:53:59.463066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1218 23:53:59.463089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1218 23:53:59.463207       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1218 23:53:59.463228       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1218 23:54:00.484025       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1218 23:54:00.484170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1218 23:54:00.499557       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1218 23:54:00.500035       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1218 23:54:00.510341       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1218 23:54:00.510454       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1218 23:54:00.523156       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1218 23:54:00.523270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1218 23:54:00.538433       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1218 23:54:00.538538       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1218 23:54:00.635430       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1218 23:54:00.635533       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1218 23:54:00.663737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1218 23:54:00.663868       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1218 23:54:00.668241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1218 23:54:00.668359       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1218 23:54:00.680904       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1218 23:54:00.681044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1218 23:54:02.944783       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 18 23:54:15 multinode-320272 kubelet[1381]: W1218 23:54:15.646892    1381 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:multinode-320272" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-320272' and this object
	Dec 18 23:54:15 multinode-320272 kubelet[1381]: E1218 23:54:15.646903    1381 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:multinode-320272" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-320272' and this object
	Dec 18 23:54:15 multinode-320272 kubelet[1381]: I1218 23:54:15.672483    1381 topology_manager.go:215] "Topology Admit Handler" podUID="d2510149-5fa2-49db-ad53-833f8c18ed44" podNamespace="kube-system" podName="kindnet-6vp9q"
	Dec 18 23:54:15 multinode-320272 kubelet[1381]: I1218 23:54:15.723909    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d2510149-5fa2-49db-ad53-833f8c18ed44-cni-cfg\") pod \"kindnet-6vp9q\" (UID: \"d2510149-5fa2-49db-ad53-833f8c18ed44\") " pod="kube-system/kindnet-6vp9q"
	Dec 18 23:54:15 multinode-320272 kubelet[1381]: I1218 23:54:15.723999    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2510149-5fa2-49db-ad53-833f8c18ed44-lib-modules\") pod \"kindnet-6vp9q\" (UID: \"d2510149-5fa2-49db-ad53-833f8c18ed44\") " pod="kube-system/kindnet-6vp9q"
	Dec 18 23:54:15 multinode-320272 kubelet[1381]: I1218 23:54:15.724039    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2510149-5fa2-49db-ad53-833f8c18ed44-xtables-lock\") pod \"kindnet-6vp9q\" (UID: \"d2510149-5fa2-49db-ad53-833f8c18ed44\") " pod="kube-system/kindnet-6vp9q"
	Dec 18 23:54:15 multinode-320272 kubelet[1381]: I1218 23:54:15.724085    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drldl\" (UniqueName: \"kubernetes.io/projected/d2510149-5fa2-49db-ad53-833f8c18ed44-kube-api-access-drldl\") pod \"kindnet-6vp9q\" (UID: \"d2510149-5fa2-49db-ad53-833f8c18ed44\") " pod="kube-system/kindnet-6vp9q"
	Dec 18 23:54:16 multinode-320272 kubelet[1381]: E1218 23:54:16.725082    1381 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 18 23:54:16 multinode-320272 kubelet[1381]: E1218 23:54:16.725209    1381 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/49bbd70d-f3b8-438d-8a7a-ad0a46e872b0-kube-proxy podName:49bbd70d-f3b8-438d-8a7a-ad0a46e872b0 nodeName:}" failed. No retries permitted until 2023-12-18 23:54:17.225184665 +0000 UTC m=+14.609891426 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/49bbd70d-f3b8-438d-8a7a-ad0a46e872b0-kube-proxy") pod "kube-proxy-54h89" (UID: "49bbd70d-f3b8-438d-8a7a-ad0a46e872b0") : failed to sync configmap cache: timed out waiting for the condition
	Dec 18 23:54:16 multinode-320272 kubelet[1381]: W1218 23:54:16.889717    1381 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/71070d7623c10070af469b551f3bbb0dab5f1c71cb21c117a4669e81f8474851/crio-e5e141adb07f5390459bdce414592997b66308fc6f371041064adb764285d5e2 WatchSource:0}: Error finding container e5e141adb07f5390459bdce414592997b66308fc6f371041064adb764285d5e2: Status 404 returned error can't find the container with id e5e141adb07f5390459bdce414592997b66308fc6f371041064adb764285d5e2
	Dec 18 23:54:17 multinode-320272 kubelet[1381]: W1218 23:54:17.415111    1381 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/71070d7623c10070af469b551f3bbb0dab5f1c71cb21c117a4669e81f8474851/crio-7750ce2736cd32d6a31d39739b6ed3912c40e2da31a82710a803f00335e6765d WatchSource:0}: Error finding container 7750ce2736cd32d6a31d39739b6ed3912c40e2da31a82710a803f00335e6765d: Status 404 returned error can't find the container with id 7750ce2736cd32d6a31d39739b6ed3912c40e2da31a82710a803f00335e6765d
	Dec 18 23:54:17 multinode-320272 kubelet[1381]: I1218 23:54:17.922690    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-54h89" podStartSLOduration=2.9226452480000003 podCreationTimestamp="2023-12-18 23:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-18 23:54:17.906464261 +0000 UTC m=+15.291171021" watchObservedRunningTime="2023-12-18 23:54:17.922645248 +0000 UTC m=+15.307352009"
	Dec 18 23:54:22 multinode-320272 kubelet[1381]: I1218 23:54:22.779851    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-6vp9q" podStartSLOduration=7.779509301 podCreationTimestamp="2023-12-18 23:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-18 23:54:17.923062771 +0000 UTC m=+15.307769548" watchObservedRunningTime="2023-12-18 23:54:22.779509301 +0000 UTC m=+20.164216061"
	Dec 18 23:54:47 multinode-320272 kubelet[1381]: I1218 23:54:47.784238    1381 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 18 23:54:47 multinode-320272 kubelet[1381]: I1218 23:54:47.815492    1381 topology_manager.go:215] "Topology Admit Handler" podUID="aaed796f-c658-46b9-8222-ad7bdb3e9f7d" podNamespace="kube-system" podName="storage-provisioner"
	Dec 18 23:54:47 multinode-320272 kubelet[1381]: I1218 23:54:47.818237    1381 topology_manager.go:215] "Topology Admit Handler" podUID="9a076607-92d0-42d5-a2e5-95580b423c69" podNamespace="kube-system" podName="coredns-5dd5756b68-fwqn2"
	Dec 18 23:54:47 multinode-320272 kubelet[1381]: I1218 23:54:47.977085    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a076607-92d0-42d5-a2e5-95580b423c69-config-volume\") pod \"coredns-5dd5756b68-fwqn2\" (UID: \"9a076607-92d0-42d5-a2e5-95580b423c69\") " pod="kube-system/coredns-5dd5756b68-fwqn2"
	Dec 18 23:54:47 multinode-320272 kubelet[1381]: I1218 23:54:47.977148    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw4gz\" (UniqueName: \"kubernetes.io/projected/9a076607-92d0-42d5-a2e5-95580b423c69-kube-api-access-fw4gz\") pod \"coredns-5dd5756b68-fwqn2\" (UID: \"9a076607-92d0-42d5-a2e5-95580b423c69\") " pod="kube-system/coredns-5dd5756b68-fwqn2"
	Dec 18 23:54:47 multinode-320272 kubelet[1381]: I1218 23:54:47.977208    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2qpk\" (UniqueName: \"kubernetes.io/projected/aaed796f-c658-46b9-8222-ad7bdb3e9f7d-kube-api-access-w2qpk\") pod \"storage-provisioner\" (UID: \"aaed796f-c658-46b9-8222-ad7bdb3e9f7d\") " pod="kube-system/storage-provisioner"
	Dec 18 23:54:47 multinode-320272 kubelet[1381]: I1218 23:54:47.977230    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/aaed796f-c658-46b9-8222-ad7bdb3e9f7d-tmp\") pod \"storage-provisioner\" (UID: \"aaed796f-c658-46b9-8222-ad7bdb3e9f7d\") " pod="kube-system/storage-provisioner"
	Dec 18 23:54:48 multinode-320272 kubelet[1381]: W1218 23:54:48.177917    1381 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/71070d7623c10070af469b551f3bbb0dab5f1c71cb21c117a4669e81f8474851/crio-699a9e42b14d318dbf181b70efcb0e20ce6af0b5517842d1213f8146df64956d WatchSource:0}: Error finding container 699a9e42b14d318dbf181b70efcb0e20ce6af0b5517842d1213f8146df64956d: Status 404 returned error can't find the container with id 699a9e42b14d318dbf181b70efcb0e20ce6af0b5517842d1213f8146df64956d
	Dec 18 23:54:48 multinode-320272 kubelet[1381]: I1218 23:54:48.982915    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fwqn2" podStartSLOduration=33.982870764 podCreationTimestamp="2023-12-18 23:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-18 23:54:48.96765472 +0000 UTC m=+46.352361489" watchObservedRunningTime="2023-12-18 23:54:48.982870764 +0000 UTC m=+46.367577533"
	Dec 18 23:54:48 multinode-320272 kubelet[1381]: I1218 23:54:48.997051    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.997007357 podCreationTimestamp="2023-12-18 23:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-18 23:54:48.984292104 +0000 UTC m=+46.368998865" watchObservedRunningTime="2023-12-18 23:54:48.997007357 +0000 UTC m=+46.381714118"
	Dec 18 23:55:08 multinode-320272 kubelet[1381]: I1218 23:55:08.349651    1381 topology_manager.go:215] "Topology Admit Handler" podUID="6fc4e0c1-b531-43a9-a4b6-f0e06f930ed2" podNamespace="default" podName="busybox-5bc68d56bd-9rw5h"
	Dec 18 23:55:08 multinode-320272 kubelet[1381]: I1218 23:55:08.442592    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlf2f\" (UniqueName: \"kubernetes.io/projected/6fc4e0c1-b531-43a9-a4b6-f0e06f930ed2-kube-api-access-vlf2f\") pod \"busybox-5bc68d56bd-9rw5h\" (UID: \"6fc4e0c1-b531-43a9-a4b6-f0e06f930ed2\") " pod="default/busybox-5bc68d56bd-9rw5h"
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-320272 -n multinode-320272
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-320272 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.01s)
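For reference, a rough manual re-run of the failed host-ping check (a sketch only, not the test's exact commands; the pod name is taken from the busybox ReplicaSet events and the 192.168.58.1 host-side address from the kube-apiserver log above):

	kubectl --context multinode-320272 exec busybox-5bc68d56bd-9rw5h -- nslookup host.minikube.internal
	kubectl --context multinode-320272 exec busybox-5bc68d56bd-9rw5h -- ping -c 1 192.168.58.1

If the second command times out from inside the pod, the failure is likely in pod-to-host routing rather than in DNS resolution.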
x
+
TestRunningBinaryUpgrade (77.19s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.1944820832.exe start -p running-upgrade-892542 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.1944820832.exe start -p running-upgrade-892542 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m5.123196828s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-892542 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-892542 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.212227441s)
-- stdout --
	* [running-upgrade-892542] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-892542 in cluster running-upgrade-892542
	* Pulling base image v0.0.42-1702920864-17822 ...
	* Updating the running docker "running-upgrade-892542" container ...
	
	
-- /stdout --
** stderr ** 
	I1219 00:10:56.448214  943679 out.go:296] Setting OutFile to fd 1 ...
	I1219 00:10:56.448503  943679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 00:10:56.448531  943679 out.go:309] Setting ErrFile to fd 2...
	I1219 00:10:56.448552  943679 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 00:10:56.448840  943679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
	I1219 00:10:56.449303  943679 out.go:303] Setting JSON to false
	I1219 00:10:56.450378  943679 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17599,"bootTime":1702927058,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1219 00:10:56.450504  943679 start.go:138] virtualization:  
	I1219 00:10:56.452648  943679 out.go:177] * [running-upgrade-892542] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1219 00:10:56.456974  943679 out.go:177]   - MINIKUBE_LOCATION=17822
	I1219 00:10:56.457131  943679 notify.go:220] Checking for updates...
	I1219 00:10:56.458725  943679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 00:10:56.460368  943679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1219 00:10:56.462041  943679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	I1219 00:10:56.463829  943679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1219 00:10:56.465367  943679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 00:10:56.467299  943679 config.go:182] Loaded profile config "running-upgrade-892542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1219 00:10:56.469356  943679 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1219 00:10:56.470854  943679 driver.go:392] Setting default libvirt URI to qemu:///system
	I1219 00:10:56.495769  943679 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1219 00:10:56.496026  943679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 00:10:56.590039  943679 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-19 00:10:56.579716802 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1219 00:10:56.590144  943679 docker.go:295] overlay module found
	I1219 00:10:56.592954  943679 out.go:177] * Using the docker driver based on existing profile
	I1219 00:10:56.595385  943679 start.go:298] selected driver: docker
	I1219 00:10:56.595413  943679 start.go:902] validating driver "docker" against &{Name:running-upgrade-892542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-892542 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.127 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1219 00:10:56.595504  943679 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 00:10:56.596459  943679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 00:10:56.727957  943679 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-12-19 00:10:56.718047772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1219 00:10:56.728318  943679 cni.go:84] Creating CNI manager for ""
	I1219 00:10:56.728334  943679 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 00:10:56.728346  943679 start_flags.go:323] config:
	{Name:running-upgrade-892542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-892542 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.127 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1219 00:10:56.731052  943679 out.go:177] * Starting control plane node running-upgrade-892542 in cluster running-upgrade-892542
	I1219 00:10:56.733100  943679 cache.go:121] Beginning downloading kic base image for docker with crio
	I1219 00:10:56.735443  943679 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1219 00:10:56.737747  943679 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1219 00:10:56.737945  943679 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1219 00:10:56.766219  943679 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1219 00:10:56.766241  943679 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1219 00:10:56.816120  943679 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1219 00:10:56.816257  943679 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/running-upgrade-892542/config.json ...
	I1219 00:10:56.816479  943679 cache.go:194] Successfully downloaded all kic artifacts
	I1219 00:10:56.816532  943679 start.go:365] acquiring machines lock for running-upgrade-892542: {Name:mk963526d63c7faf5292988cc165858a1d6802e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:56.816582  943679 start.go:369] acquired machines lock for "running-upgrade-892542" in 32.09µs
	I1219 00:10:56.816594  943679 start.go:96] Skipping create...Using existing machine configuration
	I1219 00:10:56.816600  943679 fix.go:54] fixHost starting: 
	I1219 00:10:56.816886  943679 cli_runner.go:164] Run: docker container inspect running-upgrade-892542 --format={{.State.Status}}
	I1219 00:10:56.817164  943679 cache.go:107] acquiring lock: {Name:mk576bd7c644b08d543ef918b85b2de80c1bfeac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:56.817233  943679 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1219 00:10:56.817242  943679 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 82.798µs
	I1219 00:10:56.817250  943679 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1219 00:10:56.817259  943679 cache.go:107] acquiring lock: {Name:mk6a0edbf2032ea6fed7dacde06cba37e9e469cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:56.817287  943679 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1219 00:10:56.817292  943679 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 34.043µs
	I1219 00:10:56.817299  943679 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1219 00:10:56.817320  943679 cache.go:107] acquiring lock: {Name:mkab23e3d35fec1fa468f33a063f3242db16b7bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:56.817346  943679 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1219 00:10:56.817353  943679 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 31.343µs
	I1219 00:10:56.817361  943679 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1219 00:10:56.817370  943679 cache.go:107] acquiring lock: {Name:mka218a63afcbf27ce8f7ca91226c63166259d43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:56.817402  943679 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1219 00:10:56.817406  943679 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 38.195µs
	I1219 00:10:56.817413  943679 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1219 00:10:56.817421  943679 cache.go:107] acquiring lock: {Name:mk1a2082f9724a76557bce92a1883b29a19fd21d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:56.817444  943679 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1219 00:10:56.817448  943679 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 28.996µs
	I1219 00:10:56.817455  943679 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1219 00:10:56.817463  943679 cache.go:107] acquiring lock: {Name:mkb5537e7eeb8c168be22e2f28f537b67fe8b4eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:56.817488  943679 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1219 00:10:56.817493  943679 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 31.015µs
	I1219 00:10:56.817498  943679 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1219 00:10:56.817506  943679 cache.go:107] acquiring lock: {Name:mkbcea7a3a6c0ed7928f667094049c95862b673e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:56.817529  943679 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1219 00:10:56.817533  943679 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 28.225µs
	I1219 00:10:56.817539  943679 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1219 00:10:56.817550  943679 cache.go:107] acquiring lock: {Name:mk359d91df641296bb2a42f497431f3b135ffb0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:56.817574  943679 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1219 00:10:56.817578  943679 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 32.008µs
	I1219 00:10:56.817584  943679 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1219 00:10:56.817589  943679 cache.go:87] Successfully saved all images to host disk.
	I1219 00:10:56.853126  943679 fix.go:102] recreateIfNeeded on running-upgrade-892542: state=Running err=<nil>
	W1219 00:10:56.853153  943679 fix.go:128] unexpected machine state, will restart: <nil>
	I1219 00:10:56.855934  943679 out.go:177] * Updating the running docker "running-upgrade-892542" container ...
	I1219 00:10:56.857380  943679 machine.go:88] provisioning docker machine ...
	I1219 00:10:56.857416  943679 ubuntu.go:169] provisioning hostname "running-upgrade-892542"
	I1219 00:10:56.857491  943679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-892542
	I1219 00:10:56.883669  943679 main.go:141] libmachine: Using SSH client type: native
	I1219 00:10:56.884120  943679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33623 <nil> <nil>}
	I1219 00:10:56.884140  943679 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-892542 && echo "running-upgrade-892542" | sudo tee /etc/hostname
	I1219 00:10:57.101407  943679 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-892542
	
	I1219 00:10:57.101565  943679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-892542
	I1219 00:10:57.136560  943679 main.go:141] libmachine: Using SSH client type: native
	I1219 00:10:57.137022  943679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33623 <nil> <nil>}
	I1219 00:10:57.137045  943679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-892542' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-892542/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-892542' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 00:10:57.321169  943679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1219 00:10:57.321202  943679 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17822-812008/.minikube CaCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17822-812008/.minikube}
	I1219 00:10:57.321227  943679 ubuntu.go:177] setting up certificates
	I1219 00:10:57.321245  943679 provision.go:83] configureAuth start
	I1219 00:10:57.321315  943679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-892542
	I1219 00:10:57.348176  943679 provision.go:138] copyHostCerts
	I1219 00:10:57.348269  943679 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem, removing ...
	I1219 00:10:57.348319  943679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem
	I1219 00:10:57.348425  943679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem (1078 bytes)
	I1219 00:10:57.348563  943679 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem, removing ...
	I1219 00:10:57.348570  943679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem
	I1219 00:10:57.348616  943679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem (1123 bytes)
	I1219 00:10:57.348692  943679 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem, removing ...
	I1219 00:10:57.348700  943679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem
	I1219 00:10:57.348734  943679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem (1679 bytes)
	I1219 00:10:57.348793  943679 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-892542 san=[192.168.70.127 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-892542]
	I1219 00:10:57.786553  943679 provision.go:172] copyRemoteCerts
	I1219 00:10:57.786644  943679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 00:10:57.786701  943679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-892542
	I1219 00:10:57.812150  943679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33623 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/running-upgrade-892542/id_rsa Username:docker}
	I1219 00:10:57.922964  943679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 00:10:57.958433  943679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1219 00:10:58.015204  943679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 00:10:58.064538  943679 provision.go:86] duration metric: configureAuth took 743.273666ms
	I1219 00:10:58.064583  943679 ubuntu.go:193] setting minikube options for container-runtime
	I1219 00:10:58.064816  943679 config.go:182] Loaded profile config "running-upgrade-892542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1219 00:10:58.064977  943679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-892542
	I1219 00:10:58.111433  943679 main.go:141] libmachine: Using SSH client type: native
	I1219 00:10:58.111842  943679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33623 <nil> <nil>}
	I1219 00:10:58.111860  943679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 00:10:59.150866  943679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 00:10:59.150937  943679 machine.go:91] provisioned docker machine in 2.293538663s
	I1219 00:10:59.150963  943679 start.go:300] post-start starting for "running-upgrade-892542" (driver="docker")
	I1219 00:10:59.150985  943679 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 00:10:59.151092  943679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 00:10:59.151180  943679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-892542
	I1219 00:10:59.191520  943679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33623 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/running-upgrade-892542/id_rsa Username:docker}
	I1219 00:10:59.364680  943679 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 00:10:59.381173  943679 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1219 00:10:59.381195  943679 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 00:10:59.381207  943679 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1219 00:10:59.381213  943679 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1219 00:10:59.381224  943679 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/addons for local assets ...
	I1219 00:10:59.381280  943679 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/files for local assets ...
	I1219 00:10:59.381370  943679 filesync.go:149] local asset: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem -> 8173782.pem in /etc/ssl/certs
	I1219 00:10:59.381496  943679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 00:10:59.409368  943679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem --> /etc/ssl/certs/8173782.pem (1708 bytes)
	I1219 00:10:59.461007  943679 start.go:303] post-start completed in 310.017242ms
	I1219 00:10:59.461155  943679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 00:10:59.461216  943679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-892542
	I1219 00:10:59.488212  943679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33623 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/running-upgrade-892542/id_rsa Username:docker}
	I1219 00:10:59.622823  943679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 00:10:59.637466  943679 fix.go:56] fixHost completed within 2.820858314s
	I1219 00:10:59.637504  943679 start.go:83] releasing machines lock for "running-upgrade-892542", held for 2.820913927s
	I1219 00:10:59.637583  943679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-892542
	I1219 00:10:59.670890  943679 ssh_runner.go:195] Run: cat /version.json
	I1219 00:10:59.670965  943679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-892542
	I1219 00:10:59.671223  943679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 00:10:59.671260  943679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-892542
	I1219 00:10:59.721209  943679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33623 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/running-upgrade-892542/id_rsa Username:docker}
	I1219 00:10:59.731147  943679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33623 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/running-upgrade-892542/id_rsa Username:docker}
	W1219 00:10:59.846221  943679 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1219 00:10:59.846371  943679 ssh_runner.go:195] Run: systemctl --version
	I1219 00:10:59.980904  943679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 00:11:00.448200  943679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1219 00:11:00.464444  943679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 00:11:00.504293  943679 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1219 00:11:00.504450  943679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 00:11:00.583515  943679 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 00:11:00.583587  943679 start.go:475] detecting cgroup driver to use...
	I1219 00:11:00.583633  943679 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1219 00:11:00.583715  943679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 00:11:00.651794  943679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 00:11:00.677115  943679 docker.go:203] disabling cri-docker service (if available) ...
	I1219 00:11:00.677193  943679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 00:11:00.691346  943679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 00:11:00.714050  943679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1219 00:11:00.737613  943679 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1219 00:11:00.737691  943679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 00:11:01.025068  943679 docker.go:219] disabling docker service ...
	I1219 00:11:01.025207  943679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 00:11:01.111633  943679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 00:11:01.148752  943679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 00:11:01.644012  943679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 00:11:02.224190  943679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 00:11:02.292410  943679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 00:11:02.504539  943679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1219 00:11:02.504612  943679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 00:11:02.548133  943679 out.go:177] 
	W1219 00:11:02.550369  943679 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1219 00:11:02.550393  943679 out.go:239] * 
	* 
	W1219 00:11:02.551449  943679 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 00:11:02.554139  943679 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-892542 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-19 00:11:02.583106156 +0000 UTC m=+2378.899740580
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-892542
helpers_test.go:235: (dbg) docker inspect running-upgrade-892542:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "69cbe08c47d9d28ab4779fee160f7550bc9017f3aa969027374e2e59277da2f9",
	        "Created": "2023-12-19T00:10:07.377452516Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 938041,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-19T00:10:07.817499386Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/69cbe08c47d9d28ab4779fee160f7550bc9017f3aa969027374e2e59277da2f9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69cbe08c47d9d28ab4779fee160f7550bc9017f3aa969027374e2e59277da2f9/hostname",
	        "HostsPath": "/var/lib/docker/containers/69cbe08c47d9d28ab4779fee160f7550bc9017f3aa969027374e2e59277da2f9/hosts",
	        "LogPath": "/var/lib/docker/containers/69cbe08c47d9d28ab4779fee160f7550bc9017f3aa969027374e2e59277da2f9/69cbe08c47d9d28ab4779fee160f7550bc9017f3aa969027374e2e59277da2f9-json.log",
	        "Name": "/running-upgrade-892542",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-892542:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-892542",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/14be6bcf78dd670c6121dd6bd077da846263d9f07afd187ebc8a9595167d450c-init/diff:/var/lib/docker/overlay2/545ab4e8d435ac3c481dbf988e91335801700a758f022a1bfbc7d4530f117b02/diff:/var/lib/docker/overlay2/c9d85c44fde0327ddf3d1f9672065baa47dcc115b591d77cffa050b2a186b09c/diff:/var/lib/docker/overlay2/44a52f4c8d644db25bfd040ee6beb4b7bf5e8b9c3d6a15de76c5fd77f598aa22/diff:/var/lib/docker/overlay2/68e2e3c7bb79adb757818b59bffad910e522449b05a5cd5e2d644abfb1edf4bb/diff:/var/lib/docker/overlay2/f2eeec015aea16fc5710cafdb3fc9734c79eb2ec48f66b353afd847b51dd3f26/diff:/var/lib/docker/overlay2/27cba5685359099bd600c7c52ad636eca16d04be48c9eaf2acf34b4165f2627a/diff:/var/lib/docker/overlay2/a37691ab6ff9393aca7c7cc3d9e9f5405ab7467474e0d4367fe5feb98a447e87/diff:/var/lib/docker/overlay2/cc0d344a93d038bda90ee5c6a235434391ad64c07b73302a43a5f34a00b30b54/diff:/var/lib/docker/overlay2/fb0a5a730cbb8fcc260dd2c8734514b8604194312bc5b24ed200fc179eb7eb30/diff:/var/lib/docker/overlay2/495a83
30131d6a7e039673c1a6a9c12b01a22058cc549089efe8c170436adbc1/diff:/var/lib/docker/overlay2/7848ee06c2b50a0643318e3c17d0b3d594351c491911f5cda2c8d0f34e0ac4da/diff:/var/lib/docker/overlay2/44bb997d729b58d60dfcc874b6e2eca833dfe5edfb1e70f6c59e39ccc81d33fe/diff:/var/lib/docker/overlay2/bc47ea959a7b67f3a029abd45173821bef95aed87747d0f3822d76a95edfbc22/diff:/var/lib/docker/overlay2/3bf196ab15ce006172b64f98307af2a445c3a3567be0987a6b958cb402599e72/diff:/var/lib/docker/overlay2/cc6f3afa47ff06b62e4589acc2989101d0200aa1eca76f7551eb3fcbb433b932/diff:/var/lib/docker/overlay2/1081a6d91778803c16553f62894a54f45c376f32f286c6a521f70ba6ebf5aef1/diff:/var/lib/docker/overlay2/fedec831edf487fd2a63d6f429a54f739d418cb81cc0876fa1cc8a1ac23e4f45/diff:/var/lib/docker/overlay2/75c466871a80fa0030bc2de4fb4ac7c40163358fc0bd5b0334ed7069262dc1dc/diff:/var/lib/docker/overlay2/abf0b82afe44b42e1fb76017710deca4df7e61f12c8c8c84f2e9571b098feabe/diff:/var/lib/docker/overlay2/f8651b5dfe5a43d6fc14e3151c7d0fa5e1766de61c397e69a16d2d5860ba3321/diff:/var/lib/d
ocker/overlay2/d2114a899b80d6881c3edf54c30990020f91bc07f5cb4726ec73dbf37cb58f50/diff:/var/lib/docker/overlay2/af67d7b8184cfb6cd370355c3f61821bde5333526ff2742a33024ec9dae56a69/diff:/var/lib/docker/overlay2/2280fc0195452084063de0d1ecf1397c67e431fa21c954bf0721ade727442cd0/diff:/var/lib/docker/overlay2/9b7e3b7cd7a908c178cd92f6cf1476a9ab48fd328e770c02b998ed1d520a0cf4/diff:/var/lib/docker/overlay2/99e4487d67b63113e3b054523d4d514a4752ba2ab15fa984ecd56b926e86e449/diff:/var/lib/docker/overlay2/e3b4ffa3cb70c2420c371b8cfdd0098071855376946d8f0c5d44e07491a73f0d/diff:/var/lib/docker/overlay2/badb994abf2a227c0e9bfe646a25755b6cfe2d24bb27b901ff4941876eb8fd8b/diff:/var/lib/docker/overlay2/13fa4659dbf8d85b97d13a66ff8adc14c1bd46559d17bfeef0b4ac57b7e18906/diff:/var/lib/docker/overlay2/a422fd7e1fc76d04363043fa914e01734a3636d71442fb38b947fbac5e90b30d/diff:/var/lib/docker/overlay2/4154c1cde4073649624aa25628b0caf175c2653ac9b630ba424373766b1512a8/diff:/var/lib/docker/overlay2/0b32bbcd36e9b601108f03ec1d448935e962600320f833b6f0e14cc9d4c
e9163/diff:/var/lib/docker/overlay2/46e750583b83f20f1f9dbb393728f3a5b2399394bba729072cb8ea1eada87ad4/diff:/var/lib/docker/overlay2/40b958b07857412757fd80d879a80e805922581f1650030752baff2cd56d26cb/diff:/var/lib/docker/overlay2/8cb74befbada70a1a0efad69a1b71f3b6adffcad97a3ca185b68e21696dc5f38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/14be6bcf78dd670c6121dd6bd077da846263d9f07afd187ebc8a9595167d450c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/14be6bcf78dd670c6121dd6bd077da846263d9f07afd187ebc8a9595167d450c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/14be6bcf78dd670c6121dd6bd077da846263d9f07afd187ebc8a9595167d450c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-892542",
	                "Source": "/var/lib/docker/volumes/running-upgrade-892542/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-892542",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-892542",
	                "name.minikube.sigs.k8s.io": "running-upgrade-892542",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "43f8756590341850705d712a7ef4cb5e583d783f588b45e7e343b1e9998272c5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33623"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33622"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33621"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33620"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/43f875659034",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-892542": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.127"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "69cbe08c47d9",
	                        "running-upgrade-892542"
	                    ],
	                    "NetworkID": "fcc56b515c94a7650df42e39a84bc9b18cfe1f787f0be0b9e4a2a1bb0f8cf46a",
	                    "EndpointID": "ea2b102d3f777b82a323d7fd2166abe107673b52a651ce2d6f2dbf3fdcd09cf7",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.127",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:7f",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-892542 -n running-upgrade-892542
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-892542 -n running-upgrade-892542: exit status 4 (783.599228ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1219 00:11:03.268187  944445 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-892542" does not appear in /home/jenkins/minikube-integration/17822-812008/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-892542" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-892542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-892542
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-892542: (3.976582571s)
--- FAIL: TestRunningBinaryUpgrade (77.19s)
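
The exit status 90 above traces back to a single step: the new minikube binary patches pause_image with `sudo sed -i ... /etc/crio/crio.conf.d/02-crio.conf`, but the kicbase v0.0.17 container created by the v1.17.0 profile does not ship that drop-in, so sed exits with "No such file or directory" and start aborts with RUNTIME_ENABLE. The sketch below is a hedged manual check, not part of the test suite: it assumes host-side docker access on the CI machine, reuses the profile/container name from the log above, and writes a minimal drop-in in cri-o's documented TOML format before restarting cri-o.

	# Manual diagnostic sketch (assumption: run on the CI host while the
	# running-upgrade-892542 container still exists). Creates the cri-o
	# drop-in the upgrade step expects if the old kicbase image lacks it.
	CONTAINER=running-upgrade-892542
	if ! docker exec "$CONTAINER" test -f /etc/crio/crio.conf.d/02-crio.conf; then
	  docker exec "$CONTAINER" sh -c 'mkdir -p /etc/crio/crio.conf.d && cat > /etc/crio/crio.conf.d/02-crio.conf <<EOF
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"
	EOF
	systemctl restart crio'
	fi

With the drop-in in place, re-running `out/minikube-linux-arm64 start -p running-upgrade-892542 --container-runtime=crio` would get past the pause_image step; whether the remainder of the upgrade then succeeds is not established by this log.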

                                                
                                    
TestMissingContainerUpgrade (185.33s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.3933003098.exe start -p missing-upgrade-686206 --memory=2200 --driver=docker  --container-runtime=crio
E1219 00:06:02.965666  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.3933003098.exe start -p missing-upgrade-686206 --memory=2200 --driver=docker  --container-runtime=crio: (2m21.570310796s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-686206
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-686206: (2.258511367s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-686206
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-686206 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-686206 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (35.082178835s)

                                                
                                                
-- stdout --
	* [missing-upgrade-686206] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-686206 in cluster missing-upgrade-686206
	* Pulling base image v0.0.42-1702920864-17822 ...
	* docker "missing-upgrade-686206" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 00:08:08.563466  928243 out.go:296] Setting OutFile to fd 1 ...
	I1219 00:08:08.564166  928243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 00:08:08.564209  928243 out.go:309] Setting ErrFile to fd 2...
	I1219 00:08:08.564236  928243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 00:08:08.564549  928243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
	I1219 00:08:08.565102  928243 out.go:303] Setting JSON to false
	I1219 00:08:08.566164  928243 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17431,"bootTime":1702927058,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1219 00:08:08.566286  928243 start.go:138] virtualization:  
	I1219 00:08:08.568958  928243 out.go:177] * [missing-upgrade-686206] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1219 00:08:08.571998  928243 notify.go:220] Checking for updates...
	I1219 00:08:08.571938  928243 out.go:177]   - MINIKUBE_LOCATION=17822
	I1219 00:08:08.575409  928243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 00:08:08.577859  928243 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1219 00:08:08.579735  928243 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	I1219 00:08:08.581370  928243 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1219 00:08:08.584117  928243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 00:08:08.586967  928243 config.go:182] Loaded profile config "missing-upgrade-686206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1219 00:08:08.589762  928243 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1219 00:08:08.591565  928243 driver.go:392] Setting default libvirt URI to qemu:///system
	I1219 00:08:08.632322  928243 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1219 00:08:08.632449  928243 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 00:08:08.760694  928243 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2023-12-19 00:08:08.750540737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1219 00:08:08.760817  928243 docker.go:295] overlay module found
	I1219 00:08:08.763084  928243 out.go:177] * Using the docker driver based on existing profile
	I1219 00:08:08.765193  928243 start.go:298] selected driver: docker
	I1219 00:08:08.765214  928243 start.go:902] validating driver "docker" against &{Name:missing-upgrade-686206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-686206 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.127 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1219 00:08:08.765307  928243 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 00:08:08.766132  928243 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 00:08:08.858065  928243 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2023-12-19 00:08:08.848284822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1219 00:08:08.858470  928243 cni.go:84] Creating CNI manager for ""
	I1219 00:08:08.858499  928243 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 00:08:08.858513  928243 start_flags.go:323] config:
	{Name:missing-upgrade-686206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-686206 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.127 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1219 00:08:08.860588  928243 out.go:177] * Starting control plane node missing-upgrade-686206 in cluster missing-upgrade-686206
	I1219 00:08:08.862700  928243 cache.go:121] Beginning downloading kic base image for docker with crio
	I1219 00:08:08.865468  928243 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1219 00:08:08.867232  928243 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1219 00:08:08.867331  928243 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1219 00:08:08.889849  928243 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1219 00:08:08.890087  928243 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1219 00:08:08.890744  928243 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1219 00:08:08.951927  928243 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1219 00:08:08.952083  928243 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/missing-upgrade-686206/config.json ...
	I1219 00:08:08.952430  928243 cache.go:107] acquiring lock: {Name:mk576bd7c644b08d543ef918b85b2de80c1bfeac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:08:08.952500  928243 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1219 00:08:08.952509  928243 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 83.528µs
	I1219 00:08:08.952517  928243 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1219 00:08:08.952526  928243 cache.go:107] acquiring lock: {Name:mk6a0edbf2032ea6fed7dacde06cba37e9e469cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:08:08.952603  928243 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1219 00:08:08.952773  928243 cache.go:107] acquiring lock: {Name:mkab23e3d35fec1fa468f33a063f3242db16b7bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:08:08.952910  928243 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1219 00:08:08.953102  928243 cache.go:107] acquiring lock: {Name:mka218a63afcbf27ce8f7ca91226c63166259d43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:08:08.953158  928243 cache.go:107] acquiring lock: {Name:mk359d91df641296bb2a42f497431f3b135ffb0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:08:08.953203  928243 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1219 00:08:08.953355  928243 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1219 00:08:08.953580  928243 cache.go:107] acquiring lock: {Name:mk1a2082f9724a76557bce92a1883b29a19fd21d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:08:08.953756  928243 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1219 00:08:08.953926  928243 cache.go:107] acquiring lock: {Name:mkb5537e7eeb8c168be22e2f28f537b67fe8b4eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:08:08.954122  928243 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1219 00:08:08.954314  928243 cache.go:107] acquiring lock: {Name:mkbcea7a3a6c0ed7928f667094049c95862b673e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:08:08.954532  928243 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1219 00:08:08.955264  928243 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I1219 00:08:08.955456  928243 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I1219 00:08:08.955674  928243 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1219 00:08:08.956196  928243 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1219 00:08:08.956679  928243 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1219 00:08:08.957307  928243 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I1219 00:08:08.957611  928243 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1219 00:08:09.331981  928243 cache.go:162] opening:  /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I1219 00:08:09.333052  928243 cache.go:162] opening:  /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	W1219 00:08:09.333193  928243 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1219 00:08:09.333256  928243 cache.go:162] opening:  /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	W1219 00:08:09.335349  928243 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1219 00:08:09.335388  928243 cache.go:162] opening:  /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I1219 00:08:09.353724  928243 cache.go:162] opening:  /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W1219 00:08:09.374460  928243 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1219 00:08:09.374567  928243 cache.go:162] opening:  /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I1219 00:08:09.399916  928243 cache.go:162] opening:  /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I1219 00:08:09.473571  928243 cache.go:157] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1219 00:08:09.473615  928243 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 519.688261ms
	I1219 00:08:09.473629  928243 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  0 B [_______________________] ?% ? p/s ?    > gcr.io/k8s-minikube/kicbase...:  17.69 KiB / 287.99 MiB [>] 0.01% ? p/s ?    > gcr.io/k8s-minikube/kicbase...:  897.29 KiB / 287.99 MiB [] 0.30% ? p/s ?I1219 00:08:09.996271  928243 cache.go:157] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1219 00:08:09.996306  928243 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 1.043196297s
	I1219 00:08:09.996322  928243 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1219 00:08:10.018190  928243 cache.go:157] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1219 00:08:10.018221  928243 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.065122033s
	I1219 00:08:10.018238  928243 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  15.89 MiB / 287.99 MiB  5.52% 26.47 MiB I1219 00:08:10.123444  928243 cache.go:157] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1219 00:08:10.123473  928243 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.170945601s
	I1219 00:08:10.123487  928243 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 26.47 MiB I1219 00:08:10.434867  928243 cache.go:157] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1219 00:08:10.434951  928243 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.482179291s
	I1219 00:08:10.434995  928243 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 26.47 MiB     > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 25.84 MiB     > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 25.84 MiB I1219 00:08:10.989358  928243 cache.go:157] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1219 00:08:10.989388  928243 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.03581211s
	I1219 00:08:10.989403  928243 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 25.84 MiB     > gcr.io/k8s-minikube/kicbase...:  26.92 MiB / 287.99 MiB  9.35% 24.28 MiB     > gcr.io/k8s-minikube/kicbase...:  43.87 MiB / 287.99 MiB  15.23% 24.28 MiB    > gcr.io/k8s-minikube/kicbase...:  50.09 MiB / 287.99 MiB  17.39% 24.28 MiB    > gcr.io/k8s-minikube/kicbase...:  67.79 MiB / 287.99 MiB  23.54% 27.11 MiB    > gcr.io/k8s-minikube/kicbase...:  72.98 MiB / 287.99 MiB  25.34% 27.11 MiB    > gcr.io/k8s-minikube/kicbase...:  94.62 MiB / 287.99 MiB  32.86% 27.11 MiB    > gcr.io/k8s-minikube/kicbase...:  111.06 MiB / 287.99 MiB  38.56% 30.01 Mi    > gcr.io/k8s-minikube/kicbase...:  128.20 MiB / 287.99 MiB  44.51% 30.01 MiI1219 00:08:12.866449  928243 cache.go:157] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1219 00:08:12.867322  928243 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 3.913005545s
	I1219 00:08:12.867342  928243 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1219 00:08:12.867352  928243 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 34.02 MiB
	I1219 00:08:17.976783  928243 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1219 00:08:17.976817  928243 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1219 00:08:18.119561  928243 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1219 00:08:18.119610  928243 cache.go:194] Successfully downloaded all kic artifacts
	I1219 00:08:18.119673  928243 start.go:365] acquiring machines lock for missing-upgrade-686206: {Name:mkf029e59db8326c6b98e7494c1a5237c1a7e6fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:08:18.119766  928243 start.go:369] acquired machines lock for "missing-upgrade-686206" in 61.99µs
	I1219 00:08:18.119801  928243 start.go:96] Skipping create...Using existing machine configuration
	I1219 00:08:18.119817  928243 fix.go:54] fixHost starting: 
	I1219 00:08:18.120268  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	W1219 00:08:18.138116  928243 cli_runner.go:211] docker container inspect missing-upgrade-686206 --format={{.State.Status}} returned with exit code 1
	I1219 00:08:18.138188  928243 fix.go:102] recreateIfNeeded on missing-upgrade-686206: state= err=unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:18.138209  928243 fix.go:107] machineExists: false. err=machine does not exist
	I1219 00:08:18.140786  928243 out.go:177] * docker "missing-upgrade-686206" container is missing, will recreate.
	I1219 00:08:18.143124  928243 delete.go:124] DEMOLISHING missing-upgrade-686206 ...
	I1219 00:08:18.143240  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	W1219 00:08:18.161185  928243 cli_runner.go:211] docker container inspect missing-upgrade-686206 --format={{.State.Status}} returned with exit code 1
	W1219 00:08:18.161247  928243 stop.go:75] unable to get state: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:18.161269  928243 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:18.161726  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	W1219 00:08:18.177990  928243 cli_runner.go:211] docker container inspect missing-upgrade-686206 --format={{.State.Status}} returned with exit code 1
	I1219 00:08:18.178061  928243 delete.go:82] Unable to get host status for missing-upgrade-686206, assuming it has already been deleted: state: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:18.178130  928243 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-686206
	W1219 00:08:18.194635  928243 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-686206 returned with exit code 1
	I1219 00:08:18.194673  928243 kic.go:371] could not find the container missing-upgrade-686206 to remove it. will try anyways
	I1219 00:08:18.194728  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	W1219 00:08:18.212321  928243 cli_runner.go:211] docker container inspect missing-upgrade-686206 --format={{.State.Status}} returned with exit code 1
	W1219 00:08:18.212394  928243 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:18.212459  928243 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-686206 /bin/bash -c "sudo init 0"
	W1219 00:08:18.228926  928243 cli_runner.go:211] docker exec --privileged -t missing-upgrade-686206 /bin/bash -c "sudo init 0" returned with exit code 1
	I1219 00:08:18.228962  928243 oci.go:650] error shutdown missing-upgrade-686206: docker exec --privileged -t missing-upgrade-686206 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:19.229163  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	W1219 00:08:19.246787  928243 cli_runner.go:211] docker container inspect missing-upgrade-686206 --format={{.State.Status}} returned with exit code 1
	I1219 00:08:19.246862  928243 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:19.246881  928243 oci.go:664] temporary error: container missing-upgrade-686206 status is  but expect it to be exited
	I1219 00:08:19.246911  928243 retry.go:31] will retry after 492.76212ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:19.740680  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	W1219 00:08:19.757123  928243 cli_runner.go:211] docker container inspect missing-upgrade-686206 --format={{.State.Status}} returned with exit code 1
	I1219 00:08:19.757188  928243 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:19.757202  928243 oci.go:664] temporary error: container missing-upgrade-686206 status is  but expect it to be exited
	I1219 00:08:19.757227  928243 retry.go:31] will retry after 595.806785ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:20.353866  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	W1219 00:08:20.372647  928243 cli_runner.go:211] docker container inspect missing-upgrade-686206 --format={{.State.Status}} returned with exit code 1
	I1219 00:08:20.372718  928243 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:20.372728  928243 oci.go:664] temporary error: container missing-upgrade-686206 status is  but expect it to be exited
	I1219 00:08:20.372754  928243 retry.go:31] will retry after 1.63882784s: couldn't verify container is exited. %v: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:22.011806  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	W1219 00:08:22.034292  928243 cli_runner.go:211] docker container inspect missing-upgrade-686206 --format={{.State.Status}} returned with exit code 1
	I1219 00:08:22.034358  928243 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:22.034367  928243 oci.go:664] temporary error: container missing-upgrade-686206 status is  but expect it to be exited
	I1219 00:08:22.034393  928243 retry.go:31] will retry after 1.967111342s: couldn't verify container is exited. %v: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:24.007862  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	W1219 00:08:24.029544  928243 cli_runner.go:211] docker container inspect missing-upgrade-686206 --format={{.State.Status}} returned with exit code 1
	I1219 00:08:24.029616  928243 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:24.029630  928243 oci.go:664] temporary error: container missing-upgrade-686206 status is  but expect it to be exited
	I1219 00:08:24.029656  928243 retry.go:31] will retry after 2.508840831s: couldn't verify container is exited. %v: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:26.540121  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	W1219 00:08:26.560775  928243 cli_runner.go:211] docker container inspect missing-upgrade-686206 --format={{.State.Status}} returned with exit code 1
	I1219 00:08:26.560847  928243 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:26.560858  928243 oci.go:664] temporary error: container missing-upgrade-686206 status is  but expect it to be exited
	I1219 00:08:26.560887  928243 retry.go:31] will retry after 2.534139082s: couldn't verify container is exited. %v: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:29.096123  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	W1219 00:08:29.115505  928243 cli_runner.go:211] docker container inspect missing-upgrade-686206 --format={{.State.Status}} returned with exit code 1
	I1219 00:08:29.115568  928243 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:29.115581  928243 oci.go:664] temporary error: container missing-upgrade-686206 status is  but expect it to be exited
	I1219 00:08:29.115606  928243 retry.go:31] will retry after 4.336718886s: couldn't verify container is exited. %v: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:33.454009  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	W1219 00:08:33.471392  928243 cli_runner.go:211] docker container inspect missing-upgrade-686206 --format={{.State.Status}} returned with exit code 1
	I1219 00:08:33.471463  928243 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	I1219 00:08:33.471472  928243 oci.go:664] temporary error: container missing-upgrade-686206 status is  but expect it to be exited
	I1219 00:08:33.471508  928243 oci.go:88] couldn't shut down missing-upgrade-686206 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-686206": docker container inspect missing-upgrade-686206 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-686206
	 
	I1219 00:08:33.471583  928243 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-686206
	I1219 00:08:33.489643  928243 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-686206
	W1219 00:08:33.507803  928243 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-686206 returned with exit code 1
	I1219 00:08:33.507892  928243 cli_runner.go:164] Run: docker network inspect missing-upgrade-686206 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 00:08:33.525986  928243 cli_runner.go:164] Run: docker network rm missing-upgrade-686206
	I1219 00:08:33.625677  928243 fix.go:114] Sleeping 1 second for extra luck!
	I1219 00:08:34.625830  928243 start.go:125] createHost starting for "" (driver="docker")
	I1219 00:08:34.628114  928243 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1219 00:08:34.628253  928243 start.go:159] libmachine.API.Create for "missing-upgrade-686206" (driver="docker")
	I1219 00:08:34.628276  928243 client.go:168] LocalClient.Create starting
	I1219 00:08:34.628371  928243 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem
	I1219 00:08:34.628409  928243 main.go:141] libmachine: Decoding PEM data...
	I1219 00:08:34.628427  928243 main.go:141] libmachine: Parsing certificate...
	I1219 00:08:34.628483  928243 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem
	I1219 00:08:34.628508  928243 main.go:141] libmachine: Decoding PEM data...
	I1219 00:08:34.628522  928243 main.go:141] libmachine: Parsing certificate...
	I1219 00:08:34.628779  928243 cli_runner.go:164] Run: docker network inspect missing-upgrade-686206 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1219 00:08:34.647202  928243 cli_runner.go:211] docker network inspect missing-upgrade-686206 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1219 00:08:34.647291  928243 network_create.go:281] running [docker network inspect missing-upgrade-686206] to gather additional debugging logs...
	I1219 00:08:34.647315  928243 cli_runner.go:164] Run: docker network inspect missing-upgrade-686206
	W1219 00:08:34.678734  928243 cli_runner.go:211] docker network inspect missing-upgrade-686206 returned with exit code 1
	I1219 00:08:34.678776  928243 network_create.go:284] error running [docker network inspect missing-upgrade-686206]: docker network inspect missing-upgrade-686206: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-686206 not found
	I1219 00:08:34.678789  928243 network_create.go:286] output of [docker network inspect missing-upgrade-686206]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-686206 not found
	
	** /stderr **
	I1219 00:08:34.678928  928243 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 00:08:34.700332  928243 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-775245b59831 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:eb:d5:75:4c} reservation:<nil>}
	I1219 00:08:34.700712  928243 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b3740fa51eb7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:5c:22:a4:39} reservation:<nil>}
	I1219 00:08:34.701063  928243 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c33a8280aa9f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:00:5f:16:6c} reservation:<nil>}
	I1219 00:08:34.701510  928243 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002dcc290}
	I1219 00:08:34.701532  928243 network_create.go:124] attempt to create docker network missing-upgrade-686206 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1219 00:08:34.701592  928243 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-686206 missing-upgrade-686206
	I1219 00:08:34.808228  928243 network_create.go:108] docker network missing-upgrade-686206 192.168.76.0/24 created
	I1219 00:08:34.808264  928243 kic.go:121] calculated static IP "192.168.76.2" for the "missing-upgrade-686206" container
	I1219 00:08:34.808350  928243 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1219 00:08:34.828455  928243 cli_runner.go:164] Run: docker volume create missing-upgrade-686206 --label name.minikube.sigs.k8s.io=missing-upgrade-686206 --label created_by.minikube.sigs.k8s.io=true
	I1219 00:08:34.849176  928243 oci.go:103] Successfully created a docker volume missing-upgrade-686206
	I1219 00:08:34.849266  928243 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-686206-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-686206 --entrypoint /usr/bin/test -v missing-upgrade-686206:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I1219 00:08:35.379861  928243 oci.go:107] Successfully prepared a docker volume missing-upgrade-686206
	I1219 00:08:35.379903  928243 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W1219 00:08:35.380060  928243 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1219 00:08:35.380169  928243 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1219 00:08:35.497111  928243 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-686206 --name missing-upgrade-686206 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-686206 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-686206 --network missing-upgrade-686206 --ip 192.168.76.2 --volume missing-upgrade-686206:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I1219 00:08:35.908135  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Running}}
	I1219 00:08:35.934710  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	I1219 00:08:35.966219  928243 cli_runner.go:164] Run: docker exec missing-upgrade-686206 stat /var/lib/dpkg/alternatives/iptables
	I1219 00:08:36.069625  928243 oci.go:144] the created container "missing-upgrade-686206" has a running status.
	I1219 00:08:36.069654  928243 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/missing-upgrade-686206/id_rsa...
	I1219 00:08:36.941004  928243 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17822-812008/.minikube/machines/missing-upgrade-686206/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1219 00:08:36.995940  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	I1219 00:08:37.031922  928243 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1219 00:08:37.031982  928243 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-686206 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1219 00:08:37.125791  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	I1219 00:08:37.161544  928243 machine.go:88] provisioning docker machine ...
	I1219 00:08:37.161578  928243 ubuntu.go:169] provisioning hostname "missing-upgrade-686206"
	I1219 00:08:37.161643  928243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-686206
	I1219 00:08:37.200061  928243 main.go:141] libmachine: Using SSH client type: native
	I1219 00:08:37.200503  928243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33615 <nil> <nil>}
	I1219 00:08:37.200523  928243 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-686206 && echo "missing-upgrade-686206" | sudo tee /etc/hostname
	I1219 00:08:37.386120  928243 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-686206
	
	I1219 00:08:37.386219  928243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-686206
	I1219 00:08:37.413240  928243 main.go:141] libmachine: Using SSH client type: native
	I1219 00:08:37.413648  928243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33615 <nil> <nil>}
	I1219 00:08:37.413666  928243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-686206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-686206/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-686206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 00:08:37.585044  928243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1219 00:08:37.585083  928243 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17822-812008/.minikube CaCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17822-812008/.minikube}
	I1219 00:08:37.585103  928243 ubuntu.go:177] setting up certificates
	I1219 00:08:37.585119  928243 provision.go:83] configureAuth start
	I1219 00:08:37.585193  928243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-686206
	I1219 00:08:37.619332  928243 provision.go:138] copyHostCerts
	I1219 00:08:37.619394  928243 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem, removing ...
	I1219 00:08:37.619403  928243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem
	I1219 00:08:37.619477  928243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem (1078 bytes)
	I1219 00:08:37.619581  928243 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem, removing ...
	I1219 00:08:37.619587  928243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem
	I1219 00:08:37.619623  928243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem (1123 bytes)
	I1219 00:08:37.619674  928243 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem, removing ...
	I1219 00:08:37.619679  928243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem
	I1219 00:08:37.619707  928243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem (1679 bytes)
	I1219 00:08:37.619750  928243 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-686206 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-686206]
	I1219 00:08:39.159824  928243 provision.go:172] copyRemoteCerts
	I1219 00:08:39.159899  928243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 00:08:39.160105  928243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-686206
	I1219 00:08:39.189398  928243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33615 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/missing-upgrade-686206/id_rsa Username:docker}
	I1219 00:08:39.301739  928243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 00:08:39.331279  928243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1219 00:08:39.356295  928243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 00:08:39.379913  928243 provision.go:86] duration metric: configureAuth took 1.79477323s
	I1219 00:08:39.379939  928243 ubuntu.go:193] setting minikube options for container-runtime
	I1219 00:08:39.380142  928243 config.go:182] Loaded profile config "missing-upgrade-686206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1219 00:08:39.380247  928243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-686206
	I1219 00:08:39.398461  928243 main.go:141] libmachine: Using SSH client type: native
	I1219 00:08:39.398865  928243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33615 <nil> <nil>}
	I1219 00:08:39.398882  928243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 00:08:39.992593  928243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 00:08:39.992612  928243 machine.go:91] provisioned docker machine in 2.831045558s
	I1219 00:08:39.992621  928243 client.go:171] LocalClient.Create took 5.364337679s
	I1219 00:08:39.992634  928243 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-686206" took 5.364381461s
	I1219 00:08:39.992641  928243 start.go:300] post-start starting for "missing-upgrade-686206" (driver="docker")
	I1219 00:08:39.992651  928243 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 00:08:39.992712  928243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 00:08:39.992759  928243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-686206
	I1219 00:08:40.060345  928243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33615 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/missing-upgrade-686206/id_rsa Username:docker}
	I1219 00:08:40.194152  928243 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 00:08:40.201333  928243 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1219 00:08:40.201363  928243 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 00:08:40.201378  928243 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1219 00:08:40.201385  928243 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1219 00:08:40.201398  928243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/addons for local assets ...
	I1219 00:08:40.201465  928243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/files for local assets ...
	I1219 00:08:40.201557  928243 filesync.go:149] local asset: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem -> 8173782.pem in /etc/ssl/certs
	I1219 00:08:40.201700  928243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 00:08:40.213903  928243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem --> /etc/ssl/certs/8173782.pem (1708 bytes)
	I1219 00:08:40.265194  928243 start.go:303] post-start completed in 272.538067ms
	I1219 00:08:40.265577  928243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-686206
	I1219 00:08:40.373518  928243 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/missing-upgrade-686206/config.json ...
	I1219 00:08:40.374094  928243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 00:08:40.374166  928243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-686206
	I1219 00:08:40.405079  928243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33615 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/missing-upgrade-686206/id_rsa Username:docker}
	I1219 00:08:40.507700  928243 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 00:08:40.515678  928243 start.go:128] duration metric: createHost completed in 5.889806573s
	I1219 00:08:40.515821  928243 cli_runner.go:164] Run: docker container inspect missing-upgrade-686206 --format={{.State.Status}}
	W1219 00:08:40.547598  928243 fix.go:128] unexpected machine state, will restart: <nil>
	I1219 00:08:40.547622  928243 machine.go:88] provisioning docker machine ...
	I1219 00:08:40.547638  928243 ubuntu.go:169] provisioning hostname "missing-upgrade-686206"
	I1219 00:08:40.547704  928243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-686206
	I1219 00:08:40.586456  928243 main.go:141] libmachine: Using SSH client type: native
	I1219 00:08:40.586867  928243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33615 <nil> <nil>}
	I1219 00:08:40.586888  928243 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-686206 && echo "missing-upgrade-686206" | sudo tee /etc/hostname
	I1219 00:08:40.782362  928243 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-686206
	
	I1219 00:08:40.782440  928243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-686206
	I1219 00:08:40.827207  928243 main.go:141] libmachine: Using SSH client type: native
	I1219 00:08:40.827659  928243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33615 <nil> <nil>}
	I1219 00:08:40.827683  928243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-686206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-686206/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-686206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 00:08:40.993345  928243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1219 00:08:40.993373  928243 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17822-812008/.minikube CaCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17822-812008/.minikube}
	I1219 00:08:40.993390  928243 ubuntu.go:177] setting up certificates
	I1219 00:08:40.993422  928243 provision.go:83] configureAuth start
	I1219 00:08:40.993523  928243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-686206
	I1219 00:08:41.030292  928243 provision.go:138] copyHostCerts
	I1219 00:08:41.030361  928243 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem, removing ...
	I1219 00:08:41.030375  928243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem
	I1219 00:08:41.030567  928243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem (1078 bytes)
	I1219 00:08:41.030915  928243 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem, removing ...
	I1219 00:08:41.030930  928243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem
	I1219 00:08:41.030972  928243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem (1123 bytes)
	I1219 00:08:41.031041  928243 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem, removing ...
	I1219 00:08:41.031046  928243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem
	I1219 00:08:41.031076  928243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem (1679 bytes)
	I1219 00:08:41.031121  928243 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-686206 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-686206]
	I1219 00:08:41.372007  928243 provision.go:172] copyRemoteCerts
	I1219 00:08:41.372190  928243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 00:08:41.372261  928243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-686206
	I1219 00:08:41.396575  928243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33615 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/missing-upgrade-686206/id_rsa Username:docker}
	I1219 00:08:41.502815  928243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 00:08:41.538447  928243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1219 00:08:41.571707  928243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 00:08:41.611218  928243 provision.go:86] duration metric: configureAuth took 617.780169ms
	I1219 00:08:41.611253  928243 ubuntu.go:193] setting minikube options for container-runtime
	I1219 00:08:41.611490  928243 config.go:182] Loaded profile config "missing-upgrade-686206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1219 00:08:41.611623  928243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-686206
	I1219 00:08:41.633693  928243 main.go:141] libmachine: Using SSH client type: native
	I1219 00:08:41.634094  928243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33615 <nil> <nil>}
	I1219 00:08:41.634114  928243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 00:08:42.029163  928243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 00:08:42.029193  928243 machine.go:91] provisioned docker machine in 1.481563484s
	I1219 00:08:42.029204  928243 start.go:300] post-start starting for "missing-upgrade-686206" (driver="docker")
	I1219 00:08:42.029239  928243 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 00:08:42.029335  928243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 00:08:42.029420  928243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-686206
	I1219 00:08:42.052664  928243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33615 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/missing-upgrade-686206/id_rsa Username:docker}
	I1219 00:08:42.193594  928243 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 00:08:42.199148  928243 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1219 00:08:42.199176  928243 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 00:08:42.199188  928243 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1219 00:08:42.199196  928243 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1219 00:08:42.199211  928243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/addons for local assets ...
	I1219 00:08:42.199281  928243 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/files for local assets ...
	I1219 00:08:42.199375  928243 filesync.go:149] local asset: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem -> 8173782.pem in /etc/ssl/certs
	I1219 00:08:42.199523  928243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 00:08:42.215709  928243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem --> /etc/ssl/certs/8173782.pem (1708 bytes)
	I1219 00:08:42.260348  928243 start.go:303] post-start completed in 231.123278ms
	I1219 00:08:42.260516  928243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 00:08:42.260642  928243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-686206
	I1219 00:08:42.285270  928243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33615 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/missing-upgrade-686206/id_rsa Username:docker}
	I1219 00:08:42.386811  928243 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 00:08:42.395071  928243 fix.go:56] fixHost completed within 24.275252952s
	I1219 00:08:42.395099  928243 start.go:83] releasing machines lock for "missing-upgrade-686206", held for 24.275314991s
	I1219 00:08:42.395190  928243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-686206
	I1219 00:08:42.427307  928243 ssh_runner.go:195] Run: cat /version.json
	I1219 00:08:42.427447  928243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-686206
	I1219 00:08:42.427318  928243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 00:08:42.427544  928243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-686206
	I1219 00:08:42.469399  928243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33615 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/missing-upgrade-686206/id_rsa Username:docker}
	I1219 00:08:42.478330  928243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33615 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/missing-upgrade-686206/id_rsa Username:docker}
	W1219 00:08:42.577322  928243 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1219 00:08:42.577449  928243 ssh_runner.go:195] Run: systemctl --version
	I1219 00:08:42.681371  928243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 00:08:42.811910  928243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1219 00:08:42.818663  928243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 00:08:42.847205  928243 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1219 00:08:42.847288  928243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 00:08:42.913077  928243 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 00:08:42.913102  928243 start.go:475] detecting cgroup driver to use...
	I1219 00:08:42.913154  928243 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1219 00:08:42.913243  928243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 00:08:42.964376  928243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 00:08:42.978228  928243 docker.go:203] disabling cri-docker service (if available) ...
	I1219 00:08:42.978347  928243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 00:08:42.992325  928243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 00:08:43.007105  928243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1219 00:08:43.022405  928243 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1219 00:08:43.022521  928243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 00:08:43.163532  928243 docker.go:219] disabling docker service ...
	I1219 00:08:43.163653  928243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 00:08:43.179833  928243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 00:08:43.194212  928243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 00:08:43.332180  928243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 00:08:43.471435  928243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 00:08:43.485132  928243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 00:08:43.504990  928243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1219 00:08:43.505092  928243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 00:08:43.525149  928243 out.go:177] 
	W1219 00:08:43.526801  928243 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1219 00:08:43.526962  928243 out.go:239] * 
	* 
	W1219 00:08:43.528021  928243 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 00:08:43.530527  928243 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-686206 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-12-19 00:08:43.569787672 +0000 UTC m=+2239.886422112
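A minimal local triage sketch (hypothetical, not part of the test run; assumes docker is on PATH and reuses the kicbase digest logged above) to confirm that a container built from the legacy base image has no /etc/crio/crio.conf.d/02-crio.conf, which is why the pause_image sed above exits with status 2:

	# Hypothetical reproduction aid, not output from this run.
	# The run above fails because sed cannot read /etc/crio/crio.conf.d/02-crio.conf
	# inside the recreated container.
	IMG='gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e'
	# List whatever cri-o configuration the image actually ships.
	docker run --rm --entrypoint /bin/sh "$IMG" -c 'ls -l /etc/crio /etc/crio/crio.conf.d 2>&1 || true'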
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-686206
helpers_test.go:235: (dbg) docker inspect missing-upgrade-686206:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "61bd613a505d244ec701539f4cee514f925f101875b6bb9872b3fd79af474f2b",
	        "Created": "2023-12-19T00:08:35.51966342Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 929891,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-19T00:08:35.898269876Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/61bd613a505d244ec701539f4cee514f925f101875b6bb9872b3fd79af474f2b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/61bd613a505d244ec701539f4cee514f925f101875b6bb9872b3fd79af474f2b/hostname",
	        "HostsPath": "/var/lib/docker/containers/61bd613a505d244ec701539f4cee514f925f101875b6bb9872b3fd79af474f2b/hosts",
	        "LogPath": "/var/lib/docker/containers/61bd613a505d244ec701539f4cee514f925f101875b6bb9872b3fd79af474f2b/61bd613a505d244ec701539f4cee514f925f101875b6bb9872b3fd79af474f2b-json.log",
	        "Name": "/missing-upgrade-686206",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-686206:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-686206",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/191c0e135004854df6a0dd65ca48fb431b18bb3e91dd8bc22fe13ed1fc32e463-init/diff:/var/lib/docker/overlay2/545ab4e8d435ac3c481dbf988e91335801700a758f022a1bfbc7d4530f117b02/diff:/var/lib/docker/overlay2/c9d85c44fde0327ddf3d1f9672065baa47dcc115b591d77cffa050b2a186b09c/diff:/var/lib/docker/overlay2/44a52f4c8d644db25bfd040ee6beb4b7bf5e8b9c3d6a15de76c5fd77f598aa22/diff:/var/lib/docker/overlay2/68e2e3c7bb79adb757818b59bffad910e522449b05a5cd5e2d644abfb1edf4bb/diff:/var/lib/docker/overlay2/f2eeec015aea16fc5710cafdb3fc9734c79eb2ec48f66b353afd847b51dd3f26/diff:/var/lib/docker/overlay2/27cba5685359099bd600c7c52ad636eca16d04be48c9eaf2acf34b4165f2627a/diff:/var/lib/docker/overlay2/a37691ab6ff9393aca7c7cc3d9e9f5405ab7467474e0d4367fe5feb98a447e87/diff:/var/lib/docker/overlay2/cc0d344a93d038bda90ee5c6a235434391ad64c07b73302a43a5f34a00b30b54/diff:/var/lib/docker/overlay2/fb0a5a730cbb8fcc260dd2c8734514b8604194312bc5b24ed200fc179eb7eb30/diff:/var/lib/docker/overlay2/495a83
30131d6a7e039673c1a6a9c12b01a22058cc549089efe8c170436adbc1/diff:/var/lib/docker/overlay2/7848ee06c2b50a0643318e3c17d0b3d594351c491911f5cda2c8d0f34e0ac4da/diff:/var/lib/docker/overlay2/44bb997d729b58d60dfcc874b6e2eca833dfe5edfb1e70f6c59e39ccc81d33fe/diff:/var/lib/docker/overlay2/bc47ea959a7b67f3a029abd45173821bef95aed87747d0f3822d76a95edfbc22/diff:/var/lib/docker/overlay2/3bf196ab15ce006172b64f98307af2a445c3a3567be0987a6b958cb402599e72/diff:/var/lib/docker/overlay2/cc6f3afa47ff06b62e4589acc2989101d0200aa1eca76f7551eb3fcbb433b932/diff:/var/lib/docker/overlay2/1081a6d91778803c16553f62894a54f45c376f32f286c6a521f70ba6ebf5aef1/diff:/var/lib/docker/overlay2/fedec831edf487fd2a63d6f429a54f739d418cb81cc0876fa1cc8a1ac23e4f45/diff:/var/lib/docker/overlay2/75c466871a80fa0030bc2de4fb4ac7c40163358fc0bd5b0334ed7069262dc1dc/diff:/var/lib/docker/overlay2/abf0b82afe44b42e1fb76017710deca4df7e61f12c8c8c84f2e9571b098feabe/diff:/var/lib/docker/overlay2/f8651b5dfe5a43d6fc14e3151c7d0fa5e1766de61c397e69a16d2d5860ba3321/diff:/var/lib/d
ocker/overlay2/d2114a899b80d6881c3edf54c30990020f91bc07f5cb4726ec73dbf37cb58f50/diff:/var/lib/docker/overlay2/af67d7b8184cfb6cd370355c3f61821bde5333526ff2742a33024ec9dae56a69/diff:/var/lib/docker/overlay2/2280fc0195452084063de0d1ecf1397c67e431fa21c954bf0721ade727442cd0/diff:/var/lib/docker/overlay2/9b7e3b7cd7a908c178cd92f6cf1476a9ab48fd328e770c02b998ed1d520a0cf4/diff:/var/lib/docker/overlay2/99e4487d67b63113e3b054523d4d514a4752ba2ab15fa984ecd56b926e86e449/diff:/var/lib/docker/overlay2/e3b4ffa3cb70c2420c371b8cfdd0098071855376946d8f0c5d44e07491a73f0d/diff:/var/lib/docker/overlay2/badb994abf2a227c0e9bfe646a25755b6cfe2d24bb27b901ff4941876eb8fd8b/diff:/var/lib/docker/overlay2/13fa4659dbf8d85b97d13a66ff8adc14c1bd46559d17bfeef0b4ac57b7e18906/diff:/var/lib/docker/overlay2/a422fd7e1fc76d04363043fa914e01734a3636d71442fb38b947fbac5e90b30d/diff:/var/lib/docker/overlay2/4154c1cde4073649624aa25628b0caf175c2653ac9b630ba424373766b1512a8/diff:/var/lib/docker/overlay2/0b32bbcd36e9b601108f03ec1d448935e962600320f833b6f0e14cc9d4c
e9163/diff:/var/lib/docker/overlay2/46e750583b83f20f1f9dbb393728f3a5b2399394bba729072cb8ea1eada87ad4/diff:/var/lib/docker/overlay2/40b958b07857412757fd80d879a80e805922581f1650030752baff2cd56d26cb/diff:/var/lib/docker/overlay2/8cb74befbada70a1a0efad69a1b71f3b6adffcad97a3ca185b68e21696dc5f38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/191c0e135004854df6a0dd65ca48fb431b18bb3e91dd8bc22fe13ed1fc32e463/merged",
	                "UpperDir": "/var/lib/docker/overlay2/191c0e135004854df6a0dd65ca48fb431b18bb3e91dd8bc22fe13ed1fc32e463/diff",
	                "WorkDir": "/var/lib/docker/overlay2/191c0e135004854df6a0dd65ca48fb431b18bb3e91dd8bc22fe13ed1fc32e463/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-686206",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-686206/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-686206",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-686206",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-686206",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ac6877130e3cf59363c3d5eb68f0bdc4f35cbeeacb0ece658bcd97ca01f3a77",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33615"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33614"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33611"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33613"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33612"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7ac6877130e3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-686206": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "61bd613a505d",
	                        "missing-upgrade-686206"
	                    ],
	                    "NetworkID": "2c04f139f3bf4575eb3145df6e3df512b2a0881804c1ed04c299580b88e5439e",
	                    "EndpointID": "bacef6337b08dfbaca545049ad03c339ef97adcd646c807523508769d8edee61",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-686206 -n missing-upgrade-686206
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-686206 -n missing-upgrade-686206: exit status 6 (525.530621ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1219 00:08:44.130727  931472 status.go:415] kubeconfig endpoint: got: 192.168.59.127:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-686206" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-686206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-686206
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-686206: (2.155916853s)
--- FAIL: TestMissingContainerUpgrade (185.33s)
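	The post-mortem status check (exit status 6) fails because the kubeconfig still records the old VM endpoint (got: 192.168.59.127:8443, want: 192.168.76.2:8443) while docker inspect shows the recreated container at 192.168.76.2. A minimal sketch of how that mismatch could be checked by hand, assuming the profile name missing-upgrade-686206 from this log, that its context is the current kubectl context, and stock kubectl/docker/minikube CLIs (none of these commands are run by the test itself):

	# endpoint recorded in the active kubeconfig context
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
	# IP actually assigned to the minikube container (should match the docker inspect output above)
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' missing-upgrade-686206
	# the fix the warning in the status output itself suggests
	minikube update-context -p missing-upgrade-686206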

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (99.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.1104713867.exe start -p stopped-upgrade-345510 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.1104713867.exe start -p stopped-upgrade-345510 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m12.002294545s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.1104713867.exe -p stopped-upgrade-345510 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.1104713867.exe -p stopped-upgrade-345510 stop: (20.368691704s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-345510 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-345510 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.994442972s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-345510] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-345510 in cluster stopped-upgrade-345510
	* Pulling base image v0.0.42-1702920864-17822 ...
	* Restarting existing docker container for "stopped-upgrade-345510" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 00:10:19.950017  939065 out.go:296] Setting OutFile to fd 1 ...
	I1219 00:10:19.950259  939065 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 00:10:19.950286  939065 out.go:309] Setting ErrFile to fd 2...
	I1219 00:10:19.950306  939065 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 00:10:19.950588  939065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
	I1219 00:10:19.950995  939065 out.go:303] Setting JSON to false
	I1219 00:10:19.951942  939065 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17562,"bootTime":1702927058,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1219 00:10:19.952059  939065 start.go:138] virtualization:  
	I1219 00:10:19.955261  939065 out.go:177] * [stopped-upgrade-345510] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1219 00:10:19.957161  939065 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1219 00:10:19.970210  939065 out.go:177]   - MINIKUBE_LOCATION=17822
	I1219 00:10:19.971880  939065 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 00:10:19.970145  939065 notify.go:220] Checking for updates...
	I1219 00:10:19.975916  939065 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1219 00:10:19.977677  939065 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	I1219 00:10:19.979650  939065 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1219 00:10:19.981239  939065 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 00:10:19.983778  939065 config.go:182] Loaded profile config "stopped-upgrade-345510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1219 00:10:19.985883  939065 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1219 00:10:19.988014  939065 driver.go:392] Setting default libvirt URI to qemu:///system
	I1219 00:10:20.050260  939065 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1219 00:10:20.050403  939065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 00:10:20.229652  939065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:44 SystemTime:2023-12-19 00:10:20.213271199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1219 00:10:20.229753  939065 docker.go:295] overlay module found
	I1219 00:10:20.231984  939065 out.go:177] * Using the docker driver based on existing profile
	I1219 00:10:20.234026  939065 start.go:298] selected driver: docker
	I1219 00:10:20.234043  939065 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-345510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-345510 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.167 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1219 00:10:20.234139  939065 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 00:10:20.234770  939065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 00:10:20.253888  939065 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1219 00:10:20.348806  939065 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:44 SystemTime:2023-12-19 00:10:20.334633985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1219 00:10:20.349175  939065 cni.go:84] Creating CNI manager for ""
	I1219 00:10:20.349195  939065 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 00:10:20.349207  939065 start_flags.go:323] config:
	{Name:stopped-upgrade-345510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-345510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.167 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1219 00:10:20.352446  939065 out.go:177] * Starting control plane node stopped-upgrade-345510 in cluster stopped-upgrade-345510
	I1219 00:10:20.354085  939065 cache.go:121] Beginning downloading kic base image for docker with crio
	I1219 00:10:20.355830  939065 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1219 00:10:20.357682  939065 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1219 00:10:20.357846  939065 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1219 00:10:20.387776  939065 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1219 00:10:20.387809  939065 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1219 00:10:20.437643  939065 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1219 00:10:20.437784  939065 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/stopped-upgrade-345510/config.json ...
	I1219 00:10:20.438033  939065 cache.go:194] Successfully downloaded all kic artifacts
	I1219 00:10:20.438090  939065 start.go:365] acquiring machines lock for stopped-upgrade-345510: {Name:mk765a7bd9110e5520ec779bd55bed90778de158 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:20.438158  939065 start.go:369] acquired machines lock for "stopped-upgrade-345510" in 33.493µs
	I1219 00:10:20.438175  939065 start.go:96] Skipping create...Using existing machine configuration
	I1219 00:10:20.438185  939065 fix.go:54] fixHost starting: 
	I1219 00:10:20.438467  939065 cli_runner.go:164] Run: docker container inspect stopped-upgrade-345510 --format={{.State.Status}}
	I1219 00:10:20.438646  939065 cache.go:107] acquiring lock: {Name:mk576bd7c644b08d543ef918b85b2de80c1bfeac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:20.438710  939065 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1219 00:10:20.438724  939065 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 80.672µs
	I1219 00:10:20.438758  939065 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1219 00:10:20.438775  939065 cache.go:107] acquiring lock: {Name:mk6a0edbf2032ea6fed7dacde06cba37e9e469cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:20.438810  939065 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1219 00:10:20.438819  939065 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 49.353µs
	I1219 00:10:20.438827  939065 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1219 00:10:20.438841  939065 cache.go:107] acquiring lock: {Name:mkab23e3d35fec1fa468f33a063f3242db16b7bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:20.438871  939065 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1219 00:10:20.438881  939065 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 41.214µs
	I1219 00:10:20.438888  939065 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1219 00:10:20.438897  939065 cache.go:107] acquiring lock: {Name:mka218a63afcbf27ce8f7ca91226c63166259d43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:20.438926  939065 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1219 00:10:20.438934  939065 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 38.662µs
	I1219 00:10:20.438941  939065 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1219 00:10:20.438950  939065 cache.go:107] acquiring lock: {Name:mk1a2082f9724a76557bce92a1883b29a19fd21d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:20.438985  939065 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1219 00:10:20.438994  939065 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 44.914µs
	I1219 00:10:20.439001  939065 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1219 00:10:20.439010  939065 cache.go:107] acquiring lock: {Name:mkb5537e7eeb8c168be22e2f28f537b67fe8b4eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:20.439038  939065 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1219 00:10:20.439048  939065 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 37.07µs
	I1219 00:10:20.439055  939065 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1219 00:10:20.439063  939065 cache.go:107] acquiring lock: {Name:mkbcea7a3a6c0ed7928f667094049c95862b673e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:20.439092  939065 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1219 00:10:20.439101  939065 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 38.646µs
	I1219 00:10:20.439108  939065 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1219 00:10:20.439116  939065 cache.go:107] acquiring lock: {Name:mk359d91df641296bb2a42f497431f3b135ffb0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 00:10:20.439144  939065 cache.go:115] /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1219 00:10:20.439152  939065 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 36.479µs
	I1219 00:10:20.439158  939065 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1219 00:10:20.439163  939065 cache.go:87] Successfully saved all images to host disk.
	I1219 00:10:20.458787  939065 fix.go:102] recreateIfNeeded on stopped-upgrade-345510: state=Stopped err=<nil>
	W1219 00:10:20.458820  939065 fix.go:128] unexpected machine state, will restart: <nil>
	I1219 00:10:20.461093  939065 out.go:177] * Restarting existing docker container for "stopped-upgrade-345510" ...
	I1219 00:10:20.462714  939065 cli_runner.go:164] Run: docker start stopped-upgrade-345510
	I1219 00:10:20.829986  939065 cli_runner.go:164] Run: docker container inspect stopped-upgrade-345510 --format={{.State.Status}}
	I1219 00:10:20.864142  939065 kic.go:430] container "stopped-upgrade-345510" state is running.
	I1219 00:10:20.864553  939065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-345510
	I1219 00:10:20.897512  939065 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/stopped-upgrade-345510/config.json ...
	I1219 00:10:20.899000  939065 machine.go:88] provisioning docker machine ...
	I1219 00:10:20.899026  939065 ubuntu.go:169] provisioning hostname "stopped-upgrade-345510"
	I1219 00:10:20.899090  939065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-345510
	I1219 00:10:20.928239  939065 main.go:141] libmachine: Using SSH client type: native
	I1219 00:10:20.928664  939065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33627 <nil> <nil>}
	I1219 00:10:20.928685  939065 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-345510 && echo "stopped-upgrade-345510" | sudo tee /etc/hostname
	I1219 00:10:20.932170  939065 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1219 00:10:24.099585  939065 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-345510
	
	I1219 00:10:24.099735  939065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-345510
	I1219 00:10:24.128520  939065 main.go:141] libmachine: Using SSH client type: native
	I1219 00:10:24.128942  939065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33627 <nil> <nil>}
	I1219 00:10:24.128961  939065 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-345510' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-345510/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-345510' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 00:10:24.274347  939065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1219 00:10:24.274444  939065 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17822-812008/.minikube CaCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17822-812008/.minikube}
	I1219 00:10:24.274506  939065 ubuntu.go:177] setting up certificates
	I1219 00:10:24.274553  939065 provision.go:83] configureAuth start
	I1219 00:10:24.274647  939065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-345510
	I1219 00:10:24.307149  939065 provision.go:138] copyHostCerts
	I1219 00:10:24.307247  939065 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem, removing ...
	I1219 00:10:24.307275  939065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem
	I1219 00:10:24.307366  939065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/cert.pem (1123 bytes)
	I1219 00:10:24.307500  939065 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem, removing ...
	I1219 00:10:24.307512  939065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem
	I1219 00:10:24.307557  939065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/key.pem (1679 bytes)
	I1219 00:10:24.307635  939065 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem, removing ...
	I1219 00:10:24.307640  939065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem
	I1219 00:10:24.307667  939065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17822-812008/.minikube/ca.pem (1078 bytes)
	I1219 00:10:24.307753  939065 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-345510 san=[192.168.59.167 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-345510]
	I1219 00:10:24.730510  939065 provision.go:172] copyRemoteCerts
	I1219 00:10:24.730580  939065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 00:10:24.730624  939065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-345510
	I1219 00:10:24.756752  939065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33627 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/stopped-upgrade-345510/id_rsa Username:docker}
	I1219 00:10:24.857133  939065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 00:10:24.881348  939065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1219 00:10:24.906290  939065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 00:10:24.930349  939065 provision.go:86] duration metric: configureAuth took 655.748562ms
	I1219 00:10:24.930384  939065 ubuntu.go:193] setting minikube options for container-runtime
	I1219 00:10:24.930585  939065 config.go:182] Loaded profile config "stopped-upgrade-345510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1219 00:10:24.930701  939065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-345510
	I1219 00:10:24.957362  939065 main.go:141] libmachine: Using SSH client type: native
	I1219 00:10:24.957776  939065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 33627 <nil> <nil>}
	I1219 00:10:24.957791  939065 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 00:10:25.437272  939065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 00:10:25.437336  939065 machine.go:91] provisioned docker machine in 4.538318291s
	I1219 00:10:25.437362  939065 start.go:300] post-start starting for "stopped-upgrade-345510" (driver="docker")
	I1219 00:10:25.437393  939065 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 00:10:25.437478  939065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 00:10:25.437543  939065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-345510
	I1219 00:10:25.463992  939065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33627 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/stopped-upgrade-345510/id_rsa Username:docker}
	I1219 00:10:25.569652  939065 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 00:10:25.575787  939065 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1219 00:10:25.575824  939065 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 00:10:25.575843  939065 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1219 00:10:25.575852  939065 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1219 00:10:25.575870  939065 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/addons for local assets ...
	I1219 00:10:25.575934  939065 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-812008/.minikube/files for local assets ...
	I1219 00:10:25.576118  939065 filesync.go:149] local asset: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem -> 8173782.pem in /etc/ssl/certs
	I1219 00:10:25.576274  939065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 00:10:25.588402  939065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/ssl/certs/8173782.pem --> /etc/ssl/certs/8173782.pem (1708 bytes)
	I1219 00:10:25.620315  939065 start.go:303] post-start completed in 182.916655ms
	I1219 00:10:25.620446  939065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 00:10:25.620527  939065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-345510
	I1219 00:10:25.654365  939065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33627 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/stopped-upgrade-345510/id_rsa Username:docker}
	I1219 00:10:25.759035  939065 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 00:10:25.765475  939065 fix.go:56] fixHost completed within 5.327280667s
	I1219 00:10:25.765501  939065 start.go:83] releasing machines lock for "stopped-upgrade-345510", held for 5.327330423s
	I1219 00:10:25.765577  939065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-345510
	I1219 00:10:25.790080  939065 ssh_runner.go:195] Run: cat /version.json
	I1219 00:10:25.790137  939065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-345510
	I1219 00:10:25.790383  939065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 00:10:25.790447  939065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-345510
	I1219 00:10:25.825335  939065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33627 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/stopped-upgrade-345510/id_rsa Username:docker}
	I1219 00:10:25.848990  939065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33627 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/stopped-upgrade-345510/id_rsa Username:docker}
	W1219 00:10:25.932980  939065 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1219 00:10:25.933077  939065 ssh_runner.go:195] Run: systemctl --version
	I1219 00:10:26.011309  939065 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 00:10:26.148479  939065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1219 00:10:26.158365  939065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 00:10:26.196096  939065 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1219 00:10:26.196218  939065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 00:10:26.253878  939065 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 00:10:26.253917  939065 start.go:475] detecting cgroup driver to use...
	I1219 00:10:26.253961  939065 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1219 00:10:26.254032  939065 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 00:10:26.295018  939065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 00:10:26.309304  939065 docker.go:203] disabling cri-docker service (if available) ...
	I1219 00:10:26.309427  939065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 00:10:26.323345  939065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 00:10:26.337537  939065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1219 00:10:26.352726  939065 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1219 00:10:26.352838  939065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 00:10:26.486863  939065 docker.go:219] disabling docker service ...
	I1219 00:10:26.487004  939065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 00:10:26.503710  939065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 00:10:26.522739  939065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 00:10:26.663655  939065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 00:10:26.794380  939065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 00:10:26.807202  939065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 00:10:26.831268  939065 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1219 00:10:26.831363  939065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 00:10:26.845591  939065 out.go:177] 
	W1219 00:10:26.847100  939065 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1219 00:10:26.847121  939065 out.go:239] * 
	* 
	W1219 00:10:26.848434  939065 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 00:10:26.850824  939065 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-345510 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (99.37s)
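	The stderr above traces the exit status 90 to the RUNTIME_ENABLE step: the restarted v0.0.17 kicbase container has no /etc/crio/crio.conf.d/02-crio.conf, so the sed rewrite of pause_image fails with "No such file or directory". A minimal sketch of how the missing drop-in could be confirmed from the host, assuming the stopped-upgrade-345510 container from this log is still running and its default user is root (these commands are not part of the test):

	# list CRI-O drop-in configs inside the kicbase container; 02-crio.conf is expected to be absent
	docker exec stopped-upgrade-345510 ls -la /etc/crio/crio.conf.d/
	# replay the rewrite minikube attempted over SSH, which fails here for the same reason
	docker exec stopped-upgrade-345510 sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf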

                                                
                                    

Test pass (277/315)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 17.52
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.4/json-events 13.16
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.2/json-events 4.78
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
23 TestDownloadOnly/DeleteAll 0.23
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.41
26 TestBinaryMirror 0.63
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.1
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
32 TestAddons/Setup 153.69
34 TestAddons/parallel/Registry 15.63
36 TestAddons/parallel/InspektorGadget 11.93
37 TestAddons/parallel/MetricsServer 6.92
40 TestAddons/parallel/CSI 62.83
41 TestAddons/parallel/Headlamp 11.5
42 TestAddons/parallel/CloudSpanner 6.76
43 TestAddons/parallel/LocalPath 53.58
44 TestAddons/parallel/NvidiaDevicePlugin 6.56
47 TestAddons/serial/GCPAuth/Namespaces 0.18
48 TestAddons/StoppedEnableDisable 12.27
49 TestCertOptions 34.94
50 TestCertExpiration 254.39
52 TestForceSystemdFlag 41.02
53 TestForceSystemdEnv 43.94
59 TestErrorSpam/setup 31.39
60 TestErrorSpam/start 0.87
61 TestErrorSpam/status 1.17
62 TestErrorSpam/pause 1.9
63 TestErrorSpam/unpause 2.03
64 TestErrorSpam/stop 1.48
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 52.17
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 38.91
71 TestFunctional/serial/KubeContext 0.09
72 TestFunctional/serial/KubectlGetPods 0.11
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.89
76 TestFunctional/serial/CacheCmd/cache/add_local 1.11
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
78 TestFunctional/serial/CacheCmd/cache/list 0.09
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.29
81 TestFunctional/serial/CacheCmd/cache/delete 0.17
82 TestFunctional/serial/MinikubeKubectlCmd 0.18
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
84 TestFunctional/serial/ExtraConfig 33.39
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.85
87 TestFunctional/serial/LogsFileCmd 1.98
88 TestFunctional/serial/InvalidService 4.81
90 TestFunctional/parallel/ConfigCmd 0.57
91 TestFunctional/parallel/DashboardCmd 14.67
92 TestFunctional/parallel/DryRun 0.56
93 TestFunctional/parallel/InternationalLanguage 0.23
94 TestFunctional/parallel/StatusCmd 1.31
98 TestFunctional/parallel/ServiceCmdConnect 10.74
99 TestFunctional/parallel/AddonsCmd 0.26
100 TestFunctional/parallel/PersistentVolumeClaim 25.86
102 TestFunctional/parallel/SSHCmd 0.84
103 TestFunctional/parallel/CpCmd 2.84
105 TestFunctional/parallel/FileSync 0.5
106 TestFunctional/parallel/CertSync 2.42
110 TestFunctional/parallel/NodeLabels 0.12
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.99
114 TestFunctional/parallel/License 0.34
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.79
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.55
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.15
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
128 TestFunctional/parallel/ProfileCmd/profile_list 0.45
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
130 TestFunctional/parallel/MountCmd/any-port 8.01
131 TestFunctional/parallel/ServiceCmd/List 0.69
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
134 TestFunctional/parallel/ServiceCmd/Format 0.44
135 TestFunctional/parallel/ServiceCmd/URL 0.44
136 TestFunctional/parallel/MountCmd/specific-port 2.61
137 TestFunctional/parallel/MountCmd/VerifyCleanup 2.21
138 TestFunctional/parallel/Version/short 0.09
139 TestFunctional/parallel/Version/components 1.06
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.49
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.36
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.37
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.43
144 TestFunctional/parallel/ImageCommands/ImageBuild 2.79
145 TestFunctional/parallel/ImageCommands/Setup 2.42
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.03
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.27
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
150 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.5
151 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.39
152 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.01
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.33
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.03
156 TestFunctional/delete_addon-resizer_images 0.08
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
162 TestIngressAddonLegacy/StartLegacyK8sCluster 97.1
164 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.44
165 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.7
169 TestJSONOutput/start/Command 49.76
170 TestJSONOutput/start/Audit 0
172 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/pause/Command 0.85
176 TestJSONOutput/pause/Audit 0
178 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/unpause/Command 0.76
182 TestJSONOutput/unpause/Audit 0
184 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/stop/Command 5.91
188 TestJSONOutput/stop/Audit 0
190 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
192 TestErrorJSONOutput 0.35
194 TestKicCustomNetwork/create_custom_network 47.78
195 TestKicCustomNetwork/use_default_bridge_network 37.38
196 TestKicExistingNetwork 35.36
197 TestKicCustomSubnet 36.21
198 TestKicStaticIP 32.09
199 TestMainNoArgs 0.07
200 TestMinikubeProfile 69.56
203 TestMountStart/serial/StartWithMountFirst 7.58
204 TestMountStart/serial/VerifyMountFirst 0.32
205 TestMountStart/serial/StartWithMountSecond 7.6
206 TestMountStart/serial/VerifyMountSecond 0.3
207 TestMountStart/serial/DeleteFirst 1.67
208 TestMountStart/serial/VerifyMountPostDelete 0.3
209 TestMountStart/serial/Stop 1.22
210 TestMountStart/serial/RestartStopped 8.07
211 TestMountStart/serial/VerifyMountPostStop 0.31
214 TestMultiNode/serial/FreshStart2Nodes 95.08
215 TestMultiNode/serial/DeployApp2Nodes 4.93
217 TestMultiNode/serial/AddNode 23.74
218 TestMultiNode/serial/MultiNodeLabels 0.09
219 TestMultiNode/serial/ProfileList 0.36
220 TestMultiNode/serial/CopyFile 11.75
221 TestMultiNode/serial/StopNode 2.45
222 TestMultiNode/serial/StartAfterStop 12.6
223 TestMultiNode/serial/RestartKeepsNodes 122.3
224 TestMultiNode/serial/DeleteNode 5.22
225 TestMultiNode/serial/StopMultiNode 23.99
226 TestMultiNode/serial/RestartMultiNode 83.01
227 TestMultiNode/serial/ValidateNameConflict 37.04
232 TestPreload 172.86
234 TestScheduledStopUnix 110.49
237 TestInsufficientStorage 11.46
240 TestKubernetesUpgrade 151.4
243 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
244 TestNoKubernetes/serial/StartWithK8s 44.33
245 TestNoKubernetes/serial/StartWithStopK8s 30.01
246 TestNoKubernetes/serial/Start 10.2
247 TestNoKubernetes/serial/VerifyK8sNotRunning 0.42
248 TestNoKubernetes/serial/ProfileList 1.06
249 TestNoKubernetes/serial/Stop 1.3
250 TestNoKubernetes/serial/StartNoArgs 8.05
251 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.42
252 TestStoppedBinaryUpgrade/Setup 1.18
254 TestStoppedBinaryUpgrade/MinikubeLogs 0.73
263 TestPause/serial/Start 61.53
264 TestPause/serial/SecondStartNoReconfiguration 27.86
272 TestNetworkPlugins/group/false 4.5
276 TestPause/serial/Pause 1.26
277 TestPause/serial/VerifyStatus 0.44
278 TestPause/serial/Unpause 0.94
279 TestPause/serial/PauseAgain 1.46
280 TestPause/serial/DeletePaused 3.45
281 TestPause/serial/VerifyDeletedResources 0.24
283 TestStartStop/group/old-k8s-version/serial/FirstStart 137.37
284 TestStartStop/group/old-k8s-version/serial/DeployApp 9.56
285 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
286 TestStartStop/group/old-k8s-version/serial/Stop 12.16
287 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
288 TestStartStop/group/old-k8s-version/serial/SecondStart 452.67
290 TestStartStop/group/no-preload/serial/FirstStart 69.9
291 TestStartStop/group/no-preload/serial/DeployApp 9.33
292 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.23
293 TestStartStop/group/no-preload/serial/Stop 12.02
294 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
295 TestStartStop/group/no-preload/serial/SecondStart 345.78
296 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
297 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.15
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
300 TestStartStop/group/old-k8s-version/serial/Pause 3.8
301 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.12
303 TestStartStop/group/embed-certs/serial/FirstStart 89.86
304 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
305 TestStartStop/group/no-preload/serial/Pause 4.46
307 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.78
308 TestStartStop/group/embed-certs/serial/DeployApp 9.37
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.43
310 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.27
311 TestStartStop/group/embed-certs/serial/Stop 12.05
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.25
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.03
314 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
315 TestStartStop/group/embed-certs/serial/SecondStart 348.33
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
317 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 631.51
318 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.01
319 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.17
320 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
321 TestStartStop/group/embed-certs/serial/Pause 5.27
323 TestStartStop/group/newest-cni/serial/FirstStart 60.29
324 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.81
326 TestStartStop/group/newest-cni/serial/Stop 1.38
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
328 TestStartStop/group/newest-cni/serial/SecondStart 31.89
329 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
332 TestStartStop/group/newest-cni/serial/Pause 3.31
333 TestNetworkPlugins/group/auto/Start 80.47
334 TestNetworkPlugins/group/auto/KubeletFlags 0.39
335 TestNetworkPlugins/group/auto/NetCatPod 11.28
336 TestNetworkPlugins/group/auto/DNS 0.2
337 TestNetworkPlugins/group/auto/Localhost 0.18
338 TestNetworkPlugins/group/auto/HairPin 0.18
339 TestNetworkPlugins/group/kindnet/Start 78.21
340 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
341 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
342 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
343 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.58
344 TestNetworkPlugins/group/calico/Start 78.84
345 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
346 TestNetworkPlugins/group/kindnet/KubeletFlags 0.5
347 TestNetworkPlugins/group/kindnet/NetCatPod 11.33
348 TestNetworkPlugins/group/kindnet/DNS 0.24
349 TestNetworkPlugins/group/kindnet/Localhost 0.23
350 TestNetworkPlugins/group/kindnet/HairPin 0.22
351 TestNetworkPlugins/group/custom-flannel/Start 73.67
352 TestNetworkPlugins/group/calico/ControllerPod 6.01
353 TestNetworkPlugins/group/calico/KubeletFlags 0.47
354 TestNetworkPlugins/group/calico/NetCatPod 13.29
355 TestNetworkPlugins/group/calico/DNS 0.23
356 TestNetworkPlugins/group/calico/Localhost 0.23
357 TestNetworkPlugins/group/calico/HairPin 0.18
358 TestNetworkPlugins/group/enable-default-cni/Start 88.61
359 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
360 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.34
361 TestNetworkPlugins/group/custom-flannel/DNS 0.31
362 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
363 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
364 TestNetworkPlugins/group/flannel/Start 70.27
365 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
366 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.3
367 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
368 TestNetworkPlugins/group/enable-default-cni/Localhost 0.25
369 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
370 TestNetworkPlugins/group/flannel/ControllerPod 6.01
371 TestNetworkPlugins/group/bridge/Start 91.51
372 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
373 TestNetworkPlugins/group/flannel/NetCatPod 11.36
374 TestNetworkPlugins/group/flannel/DNS 0.25
375 TestNetworkPlugins/group/flannel/Localhost 0.23
376 TestNetworkPlugins/group/flannel/HairPin 0.23
377 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
378 TestNetworkPlugins/group/bridge/NetCatPod 9.28
379 TestNetworkPlugins/group/bridge/DNS 0.2
380 TestNetworkPlugins/group/bridge/Localhost 0.18
381 TestNetworkPlugins/group/bridge/HairPin 0.17
x
+
TestDownloadOnly/v1.16.0/json-events (17.52s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-162657 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-162657 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (17.516186521s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.52s)

x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-162657
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-162657: exit status 85 (89.662607ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-162657 | jenkins | v1.32.0 | 18 Dec 23 23:31 UTC |          |
	|         | -p download-only-162657        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 23:31:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 23:31:23.803589  817383 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:31:23.803812  817383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:31:23.803838  817383 out.go:309] Setting ErrFile to fd 2...
	I1218 23:31:23.803855  817383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:31:23.804157  817383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
	W1218 23:31:23.804400  817383 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17822-812008/.minikube/config/config.json: open /home/jenkins/minikube-integration/17822-812008/.minikube/config/config.json: no such file or directory
	I1218 23:31:23.804956  817383 out.go:303] Setting JSON to true
	I1218 23:31:23.805859  817383 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15226,"bootTime":1702927058,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 23:31:23.805966  817383 start.go:138] virtualization:  
	I1218 23:31:23.808638  817383 out.go:97] [download-only-162657] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 23:31:23.810513  817383 out.go:169] MINIKUBE_LOCATION=17822
	W1218 23:31:23.808944  817383 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball: no such file or directory
	I1218 23:31:23.809014  817383 notify.go:220] Checking for updates...
	I1218 23:31:23.812536  817383 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:31:23.814482  817383 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:31:23.816226  817383 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	I1218 23:31:23.818201  817383 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1218 23:31:23.821524  817383 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 23:31:23.821802  817383 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 23:31:23.846399  817383 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 23:31:23.846520  817383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:31:23.931185  817383 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-12-18 23:31:23.921793562 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:31:23.931281  817383 docker.go:295] overlay module found
	I1218 23:31:23.933192  817383 out.go:97] Using the docker driver based on user configuration
	I1218 23:31:23.933217  817383 start.go:298] selected driver: docker
	I1218 23:31:23.933229  817383 start.go:902] validating driver "docker" against <nil>
	I1218 23:31:23.933334  817383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:31:23.997932  817383 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-12-18 23:31:23.988453994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:31:23.998103  817383 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 23:31:23.998377  817383 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1218 23:31:23.998540  817383 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1218 23:31:24.004284  817383 out.go:169] Using Docker driver with root privileges
	I1218 23:31:24.006900  817383 cni.go:84] Creating CNI manager for ""
	I1218 23:31:24.006938  817383 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1218 23:31:24.006950  817383 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1218 23:31:24.006967  817383 start_flags.go:323] config:
	{Name:download-only-162657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-162657 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:31:24.009434  817383 out.go:97] Starting control plane node download-only-162657 in cluster download-only-162657
	I1218 23:31:24.009480  817383 cache.go:121] Beginning downloading kic base image for docker with crio
	I1218 23:31:24.011457  817383 out.go:97] Pulling base image v0.0.42-1702920864-17822 ...
	I1218 23:31:24.011516  817383 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1218 23:31:24.011630  817383 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 23:31:24.030317  817383 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1218 23:31:24.030966  817383 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1218 23:31:24.031104  817383 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1218 23:31:24.083766  817383 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1218 23:31:24.083810  817383 cache.go:56] Caching tarball of preloaded images
	I1218 23:31:24.084023  817383 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1218 23:31:24.086501  817383 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1218 23:31:24.086526  817383 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1218 23:31:24.222038  817383 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1218 23:31:29.560488  817383 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-162657"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

x
+
TestDownloadOnly/v1.28.4/json-events (13.16s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-162657 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-162657 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.158782574s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (13.16s)

x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-162657
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-162657: exit status 85 (93.487279ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-162657 | jenkins | v1.32.0 | 18 Dec 23 23:31 UTC |          |
	|         | -p download-only-162657        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-162657 | jenkins | v1.32.0 | 18 Dec 23 23:31 UTC |          |
	|         | -p download-only-162657        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 23:31:41
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 23:31:41.413346  817459 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:31:41.413505  817459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:31:41.413516  817459 out.go:309] Setting ErrFile to fd 2...
	I1218 23:31:41.413522  817459 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:31:41.413783  817459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
	W1218 23:31:41.413931  817459 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17822-812008/.minikube/config/config.json: open /home/jenkins/minikube-integration/17822-812008/.minikube/config/config.json: no such file or directory
	I1218 23:31:41.414172  817459 out.go:303] Setting JSON to true
	I1218 23:31:41.414993  817459 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15244,"bootTime":1702927058,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 23:31:41.415068  817459 start.go:138] virtualization:  
	I1218 23:31:41.417198  817459 out.go:97] [download-only-162657] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 23:31:41.418982  817459 out.go:169] MINIKUBE_LOCATION=17822
	I1218 23:31:41.417528  817459 notify.go:220] Checking for updates...
	I1218 23:31:41.421153  817459 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:31:41.422900  817459 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:31:41.424818  817459 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	I1218 23:31:41.426482  817459 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1218 23:31:41.429660  817459 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 23:31:41.430248  817459 config.go:182] Loaded profile config "download-only-162657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1218 23:31:41.430334  817459 start.go:810] api.Load failed for download-only-162657: filestore "download-only-162657": Docker machine "download-only-162657" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 23:31:41.430439  817459 driver.go:392] Setting default libvirt URI to qemu:///system
	W1218 23:31:41.430468  817459 start.go:810] api.Load failed for download-only-162657: filestore "download-only-162657": Docker machine "download-only-162657" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 23:31:41.454477  817459 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 23:31:41.454595  817459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:31:41.537833  817459 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-18 23:31:41.526587844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:31:41.537937  817459 docker.go:295] overlay module found
	I1218 23:31:41.539695  817459 out.go:97] Using the docker driver based on existing profile
	I1218 23:31:41.539717  817459 start.go:298] selected driver: docker
	I1218 23:31:41.539723  817459 start.go:902] validating driver "docker" against &{Name:download-only-162657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-162657 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:31:41.539885  817459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:31:41.604920  817459 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-18 23:31:41.595668086 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:31:41.605412  817459 cni.go:84] Creating CNI manager for ""
	I1218 23:31:41.605431  817459 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1218 23:31:41.605444  817459 start_flags.go:323] config:
	{Name:download-only-162657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-162657 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPU
s:}
	I1218 23:31:41.607371  817459 out.go:97] Starting control plane node download-only-162657 in cluster download-only-162657
	I1218 23:31:41.607395  817459 cache.go:121] Beginning downloading kic base image for docker with crio
	I1218 23:31:41.608904  817459 out.go:97] Pulling base image v0.0.42-1702920864-17822 ...
	I1218 23:31:41.608931  817459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1218 23:31:41.608970  817459 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 23:31:41.625662  817459 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1218 23:31:41.625826  817459 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1218 23:31:41.625845  817459 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory, skipping pull
	I1218 23:31:41.625850  817459 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in cache, skipping pull
	I1218 23:31:41.625857  817459 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	I1218 23:31:41.695707  817459 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1218 23:31:41.695744  817459 cache.go:56] Caching tarball of preloaded images
	I1218 23:31:41.696411  817459 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1218 23:31:41.698262  817459 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1218 23:31:41.698281  817459 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I1218 23:31:41.825506  817459 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/17822-812008/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-162657"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (4.78s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-162657 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-162657 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.778685337s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (4.78s)

x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
--- PASS: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-162657
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-162657: exit status 85 (93.887398ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-162657 | jenkins | v1.32.0 | 18 Dec 23 23:31 UTC |          |
	|         | -p download-only-162657           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-162657 | jenkins | v1.32.0 | 18 Dec 23 23:31 UTC |          |
	|         | -p download-only-162657           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-162657 | jenkins | v1.32.0 | 18 Dec 23 23:31 UTC |          |
	|         | -p download-only-162657           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 23:31:54
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 23:31:54.669213  817532 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:31:54.669462  817532 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:31:54.669487  817532 out.go:309] Setting ErrFile to fd 2...
	I1218 23:31:54.669508  817532 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:31:54.669788  817532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
	W1218 23:31:54.669949  817532 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17822-812008/.minikube/config/config.json: open /home/jenkins/minikube-integration/17822-812008/.minikube/config/config.json: no such file or directory
	I1218 23:31:54.670235  817532 out.go:303] Setting JSON to true
	I1218 23:31:54.671124  817532 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15257,"bootTime":1702927058,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 23:31:54.671221  817532 start.go:138] virtualization:  
	I1218 23:31:54.673526  817532 out.go:97] [download-only-162657] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 23:31:54.673872  817532 notify.go:220] Checking for updates...
	I1218 23:31:54.676364  817532 out.go:169] MINIKUBE_LOCATION=17822
	I1218 23:31:54.678119  817532 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:31:54.679924  817532 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:31:54.681766  817532 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	I1218 23:31:54.683215  817532 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-162657"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

x
+
TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.41s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-162657
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.41s)

x
+
TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-090650 --alsologtostderr --binary-mirror http://127.0.0.1:33079 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-090650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-090650
--- PASS: TestBinaryMirror (0.63s)
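For context, this test serves the Kubernetes binaries from a throwaway local HTTP mirror (the 127.0.0.1:33079 address is the test's ephemeral server) and points --binary-mirror at it, while --download-only stops minikube before any node is created. The sketch below only restates the invocation and cleanup from the log:

  out/minikube-linux-arm64 start --download-only -p binary-mirror-090650 \
    --binary-mirror http://127.0.0.1:33079 \
    --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 delete -p binary-mirror-090650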

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.1s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-045387
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-045387: exit status 85 (98.304378ms)

                                                
                                                
-- stdout --
	* Profile "addons-045387" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-045387"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.10s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-045387
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-045387: exit status 85 (89.747314ms)

                                                
                                                
-- stdout --
	* Profile "addons-045387" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-045387"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/Setup (153.69s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-045387 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-045387 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m33.689037845s)
--- PASS: TestAddons/Setup (153.69s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.63s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 61.893768ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-s859s" [c35fa1d5-ca7d-47a9-a8fc-3283888ffb9f] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00488783s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2qgkd" [f67d4658-db21-4c23-ac2d-b54c5dd26372] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005488735s
addons_test.go:339: (dbg) Run:  kubectl --context addons-045387 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-045387 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-045387 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.439205974s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-045387 ip
2023/12/18 23:34:50 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-045387 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.63s)
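For reference, the registry probe above can be repeated by hand against the same profile; both commands are taken verbatim from the log, and the one-off busybox pod removes itself because of --rm:

  kubectl --context addons-045387 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  out/minikube-linux-arm64 -p addons-045387 ip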

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.93s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-44pqt" [a0b3ef14-bd83-424e-a8b2-0dd226726530] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005109187s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-045387
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-045387: (5.922915701s)
--- PASS: TestAddons/parallel/InspektorGadget (11.93s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.92s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 8.078232ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-q4g7r" [708e46c7-d95c-4afb-b32e-fbd121bb9051] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005069934s
addons_test.go:414: (dbg) Run:  kubectl --context addons-045387 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-045387 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.92s)

                                                
                                    
x
+
TestAddons/parallel/CSI (62.83s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 64.126266ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-045387 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-045387 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3b289a4c-ccd5-45f7-a1d2-af04ba52d602] Pending
helpers_test.go:344: "task-pv-pod" [3b289a4c-ccd5-45f7-a1d2-af04ba52d602] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3b289a4c-ccd5-45f7-a1d2-af04ba52d602] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005356444s
addons_test.go:583: (dbg) Run:  kubectl --context addons-045387 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-045387 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-045387 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-045387 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-045387 delete pod task-pv-pod: (1.022670497s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-045387 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-045387 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-045387 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [de41caa5-aa58-4e0b-afde-c911fff6ad5d] Pending
helpers_test.go:344: "task-pv-pod-restore" [de41caa5-aa58-4e0b-afde-c911fff6ad5d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [de41caa5-aa58-4e0b-afde-c911fff6ad5d] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005717101s
addons_test.go:625: (dbg) Run:  kubectl --context addons-045387 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-045387 delete pod task-pv-pod-restore: (1.040203446s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-045387 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-045387 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-045387 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-045387 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.889179592s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-045387 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (62.83s)
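Stripped of the PVC polling, the snapshot-and-restore flow exercised above is the following kubectl sequence; the manifests are the test's own testdata files, and the hpvc / task-pv-pod names come from those manifests:

  kubectl --context addons-045387 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-045387 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-045387 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-045387 delete pod task-pv-pod
  kubectl --context addons-045387 delete pvc hpvc
  kubectl --context addons-045387 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-045387 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml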

                                                
                                    
x
+
TestAddons/parallel/Headlamp (11.5s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-045387 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-045387 --alsologtostderr -v=1: (1.497902959s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-qtmcx" [8566bbd1-9365-4801-855d-412da996278a] Pending
helpers_test.go:344: "headlamp-777fd4b855-qtmcx" [8566bbd1-9365-4801-855d-412da996278a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-qtmcx" [8566bbd1-9365-4801-855d-412da996278a] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003482853s
--- PASS: TestAddons/parallel/Headlamp (11.50s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.76s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-brfdr" [9062c9f4-0a3e-4dac-9ec5-f5d5f2114c8d] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003468861s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-045387
--- PASS: TestAddons/parallel/CloudSpanner (6.76s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.58s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-045387 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-045387 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045387 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ddf276a7-c246-4d87-9cf6-c32ea5eb99bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ddf276a7-c246-4d87-9cf6-c32ea5eb99bd] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ddf276a7-c246-4d87-9cf6-c32ea5eb99bd] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004138133s
addons_test.go:890: (dbg) Run:  kubectl --context addons-045387 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-045387 ssh "cat /opt/local-path-provisioner/pvc-690c8abb-703f-4bf8-a4e3-6af75a8294fd_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-045387 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-045387 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-045387 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-045387 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.258004236s)
--- PASS: TestAddons/parallel/LocalPath (53.58s)
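The local-path check above boils down to: claim a PVC through storage-provisioner-rancher, let a pod write file1 into it, then read the file back over ssh. The provisioner directory embeds the PVC's generated UID, so look it up with the get pvc command rather than reusing the UID from this run; <pvc-uid> below is a placeholder:

  kubectl --context addons-045387 apply -f testdata/storage-provisioner-rancher/pvc.yaml
  kubectl --context addons-045387 apply -f testdata/storage-provisioner-rancher/pod.yaml
  kubectl --context addons-045387 get pvc test-pvc -o=json
  out/minikube-linux-arm64 -p addons-045387 ssh "cat /opt/local-path-provisioner/<pvc-uid>_default_test-pvc/file1"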

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8964k" [17d014f3-90d2-4166-8677-f54ffc3a0687] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00436886s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-045387
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-045387 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-045387 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-045387
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-045387: (11.957153946s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-045387
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-045387
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-045387
--- PASS: TestAddons/StoppedEnableDisable (12.27s)

                                                
                                    
x
+
TestCertOptions (34.94s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-564949 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-564949 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.109607746s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-564949 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-564949 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-564949 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-564949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-564949
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-564949: (2.068427017s)
--- PASS: TestCertOptions (34.94s)

                                                
                                    
x
+
TestCertExpiration (254.39s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-487253 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1219 00:12:38.752514  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-487253 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.022953567s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-487253 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-487253 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (29.696921979s)
helpers_test.go:175: Cleaning up "cert-expiration-487253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-487253
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-487253: (4.665407315s)
--- PASS: TestCertExpiration (254.39s)
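Most of the roughly four minutes here is the test waiting out the 3-minute certificate window between the two starts; the second start, with --cert-expiration=8760h, has to recover a cluster whose certificates have already expired. Commands copied from the log:

  out/minikube-linux-arm64 start -p cert-expiration-487253 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
  # ... wait at least 3 minutes for the certificates to expire ...
  out/minikube-linux-arm64 start -p cert-expiration-487253 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio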

                                                
                                    
x
+
TestForceSystemdFlag (41.02s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-003729 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-003729 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.582046925s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-003729 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-003729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-003729
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-003729: (2.919502191s)
--- PASS: TestForceSystemdFlag (41.02s)

                                                
                                    
x
+
TestForceSystemdEnv (43.94s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-038626 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-038626 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.297748119s)
helpers_test.go:175: Cleaning up "force-systemd-env-038626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-038626
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-038626: (2.64264237s)
--- PASS: TestForceSystemdEnv (43.94s)

                                                
                                    
x
+
TestErrorSpam/setup (31.39s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-070903 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-070903 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-070903 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-070903 --driver=docker  --container-runtime=crio: (31.38791128s)
--- PASS: TestErrorSpam/setup (31.39s)

                                                
                                    
x
+
TestErrorSpam/start (0.87s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 start --dry-run
--- PASS: TestErrorSpam/start (0.87s)

                                                
                                    
x
+
TestErrorSpam/status (1.17s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 status
--- PASS: TestErrorSpam/status (1.17s)

                                                
                                    
x
+
TestErrorSpam/pause (1.9s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 pause
--- PASS: TestErrorSpam/pause (1.90s)

                                                
                                    
x
+
TestErrorSpam/unpause (2.03s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 unpause
--- PASS: TestErrorSpam/unpause (2.03s)

                                                
                                    
x
+
TestErrorSpam/stop (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 stop: (1.244207906s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070903 --log_dir /tmp/nospam-070903 stop
--- PASS: TestErrorSpam/stop (1.48s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17822-812008/.minikube/files/etc/test/nested/copy/817378/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (52.17s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-arm64 start -p functional-348956 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1218 23:39:35.705364  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:39:35.711930  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:39:35.722561  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:39:35.742798  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:39:35.783058  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:39:35.863332  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:39:36.023795  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:39:36.344436  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:39:36.985305  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:39:38.265555  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:39:40.825749  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:39:45.946385  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:39:56.187281  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-arm64 start -p functional-348956 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (52.164414971s)
--- PASS: TestFunctional/serial/StartWithProxy (52.17s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (38.91s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-348956 --alsologtostderr -v=8
E1218 23:40:16.667466  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-348956 --alsologtostderr -v=8: (38.900702856s)
functional_test.go:659: soft start took 38.910130638s for "functional-348956" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.91s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-348956 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.89s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-348956 cache add registry.k8s.io/pause:3.1: (1.295657154s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-348956 cache add registry.k8s.io/pause:3.3: (1.317087086s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-348956 cache add registry.k8s.io/pause:latest: (1.276271813s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.89s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-348956 /tmp/TestFunctionalserialCacheCmdcacheadd_local1780324919/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 cache add minikube-local-cache-test:functional-348956
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 cache delete minikube-local-cache-test:functional-348956
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-348956
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-348956 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (386.731715ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-348956 cache reload: (1.149359976s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.29s)
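Restated, the cache_reload sequence above is four commands from the log: remove the cached pause image from the node, confirm crictl no longer sees it, run cache reload, then confirm it is back:

  out/minikube-linux-arm64 -p functional-348956 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-arm64 -p functional-348956 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image was just removed
  out/minikube-linux-arm64 -p functional-348956 cache reload
  out/minikube-linux-arm64 -p functional-348956 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after the reload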

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 kubectl -- --context functional-348956 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.18s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-348956 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (33.39s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-348956 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1218 23:40:57.628103  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-348956 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.391871334s)
functional_test.go:757: restart took 33.391966488s for "functional-348956" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.39s)
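The restart above shows the --extra-config syntax, component.key=value, applied to an existing profile; here it enables the NamespaceAutoProvision admission plugin on the apiserver. Command copied from the log:

  out/minikube-linux-arm64 start -p functional-348956 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all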

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-348956 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.85s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-348956 logs: (1.850324455s)
--- PASS: TestFunctional/serial/LogsCmd (1.85s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.98s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 logs --file /tmp/TestFunctionalserialLogsFileCmd3688282343/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-348956 logs --file /tmp/TestFunctionalserialLogsFileCmd3688282343/001/logs.txt: (1.975965195s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.98s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.81s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-348956 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-348956
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-348956: exit status 115 (757.599537ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31038 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-348956 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.81s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-348956 config get cpus: exit status 14 (87.80383ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-348956 config get cpus: exit status 14 (104.57325ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (14.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-348956 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-348956 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 842520: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.67s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-348956 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-348956 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (255.460075ms)

                                                
                                                
-- stdout --
	* [functional-348956] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1218 23:42:14.881039  842110 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:42:14.881174  842110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:42:14.881184  842110 out.go:309] Setting ErrFile to fd 2...
	I1218 23:42:14.881190  842110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:42:14.881566  842110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
	I1218 23:42:14.881975  842110 out.go:303] Setting JSON to false
	I1218 23:42:14.882962  842110 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15877,"bootTime":1702927058,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 23:42:14.883061  842110 start.go:138] virtualization:  
	I1218 23:42:14.885366  842110 out.go:177] * [functional-348956] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 23:42:14.887986  842110 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 23:42:14.889737  842110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:42:14.888131  842110 notify.go:220] Checking for updates...
	I1218 23:42:14.895640  842110 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:42:14.897758  842110 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	I1218 23:42:14.899658  842110 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 23:42:14.901308  842110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 23:42:14.903578  842110 config.go:182] Loaded profile config "functional-348956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1218 23:42:14.904230  842110 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 23:42:14.939005  842110 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 23:42:14.939163  842110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:42:15.050625  842110 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-12-18 23:42:15.036511549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:42:15.050738  842110 docker.go:295] overlay module found
	I1218 23:42:15.053867  842110 out.go:177] * Using the docker driver based on existing profile
	I1218 23:42:15.055525  842110 start.go:298] selected driver: docker
	I1218 23:42:15.055556  842110 start.go:902] validating driver "docker" against &{Name:functional-348956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-348956 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:42:15.055748  842110 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 23:42:15.058242  842110 out.go:177] 
	W1218 23:42:15.060170  842110 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1218 23:42:15.061821  842110 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-348956 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.56s)
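A `--dry-run` start only validates the requested configuration against the existing profile; asking for 250MB, below the 1800MB usable minimum, makes minikube exit with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the running cluster. A small sketch of the same check, assuming `minikube` on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Deliberately undersized memory request; --dry-run means only the
	// validation step runs and the existing profile is left unchanged.
	cmd := exec.Command("minikube", "start", "-p", "functional-348956",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	if exitErr, ok := err.(*exec.ExitError); ok {
		// The log above shows exit status 23 for this request.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}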

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-348956 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-348956 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (227.759019ms)

                                                
                                                
-- stdout --
	* [functional-348956] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1218 23:42:14.656780  842070 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:42:14.657076  842070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:42:14.657110  842070 out.go:309] Setting ErrFile to fd 2...
	I1218 23:42:14.657132  842070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:42:14.658039  842070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
	I1218 23:42:14.658463  842070 out.go:303] Setting JSON to false
	I1218 23:42:14.659396  842070 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15877,"bootTime":1702927058,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1218 23:42:14.659493  842070 start.go:138] virtualization:  
	I1218 23:42:14.661795  842070 out.go:177] * [functional-348956] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1218 23:42:14.664068  842070 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 23:42:14.665667  842070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 23:42:14.664259  842070 notify.go:220] Checking for updates...
	I1218 23:42:14.669294  842070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1218 23:42:14.671164  842070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	I1218 23:42:14.672817  842070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 23:42:14.674537  842070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 23:42:14.676692  842070 config.go:182] Loaded profile config "functional-348956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1218 23:42:14.677206  842070 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 23:42:14.702446  842070 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 23:42:14.702577  842070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:42:14.790935  842070 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-12-18 23:42:14.780064217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:42:14.791034  842070 docker.go:295] overlay module found
	I1218 23:42:14.793220  842070 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1218 23:42:14.795215  842070 start.go:298] selected driver: docker
	I1218 23:42:14.795236  842070 start.go:902] validating driver "docker" against &{Name:functional-348956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-348956 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 23:42:14.795370  842070 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 23:42:14.801131  842070 out.go:177] 
	W1218 23:42:14.804886  842070 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1218 23:42:14.806404  842070 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.31s)
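StatusCmd exercises the plain, templated (`-f`), and JSON status formats. Below is a sketch that reads the JSON form and picks out the same fields the test's template references; the struct is an assumption covering only those fields.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// status mirrors only the fields referenced by the test's template:
// host:{{.Host}}, kublet:{{.Kubelet}}, apiserver:{{.APIServer}},
// kubeconfig:{{.Kubeconfig}}. Any other fields in the JSON are ignored.
type status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// Output() still returns the captured stdout when minikube encodes a
	// degraded state in a non-zero exit code.
	out, err := exec.Command("minikube", "-p", "functional-348956",
		"status", "-o", "json").Output()
	if err != nil {
		fmt.Println("status exited with:", err)
	}
	var s status
	if jsonErr := json.Unmarshal(out, &s); jsonErr == nil {
		fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			s.Host, s.Kubelet, s.APIServer, s.Kubeconfig)
	}
}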

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-348956 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-348956 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-qxnkk" [deb30a9f-fc7d-430d-95a6-8b012c6280e7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-qxnkk" [deb30a9f-fc7d-430d-95a6-8b012c6280e7] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003380186s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31682
functional_test.go:1674: http://192.168.49.2:31682: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-qxnkk

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31682
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.74s)
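ServiceCmdConnect is a complete expose-and-reach flow: create a deployment, expose it as a NodePort service on 8080, resolve the node URL with `minikube service --url`, and GET it. A condensed Go sketch of the same flow, assuming `kubectl` and `minikube` on PATH; the deployment name, image, and port are the ones in the log, and the rollout wait stands in for the test's pod polling.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func must(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
	return string(out)
}

func main() {
	must("kubectl", "--context", "functional-348956", "create", "deployment",
		"hello-node-connect", "--image=registry.k8s.io/echoserver-arm:1.8")
	must("kubectl", "--context", "functional-348956", "expose", "deployment",
		"hello-node-connect", "--type=NodePort", "--port=8080")
	must("kubectl", "--context", "functional-348956", "rollout", "status",
		"deployment/hello-node-connect", "--timeout=120s")

	// Resolve http://<node-ip>:<node-port> for the service and hit it.
	url := strings.TrimSpace(must("minikube", "-p", "functional-348956",
		"service", "hello-node-connect", "--url"))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // echoserver reports hostname, request headers, etc.
}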

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f256a7de-b345-4417-aabf-803a7b3f711e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004551959s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-348956 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-348956 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-348956 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-348956 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e2f9e360-0336-48e6-8527-a257dc989152] Pending
helpers_test.go:344: "sp-pod" [e2f9e360-0336-48e6-8527-a257dc989152] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e2f9e360-0336-48e6-8527-a257dc989152] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004007147s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-348956 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-348956 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-348956 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ae5beee2-d525-44f7-9540-5c001c1c2ed8] Pending
helpers_test.go:344: "sp-pod" [ae5beee2-d525-44f7-9540-5c001c1c2ed8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ae5beee2-d525-44f7-9540-5c001c1c2ed8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.005500986s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-348956 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.86s)
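The PersistentVolumeClaim test verifies that data written through a PVC-backed mount survives pod deletion: apply the PVC and pod manifests, `touch` a file under the mount, delete and recreate the pod, then `ls` the mount from the new pod. A sketch of that sequence follows; the manifest paths and pod name are taken from the log, and `kubectl wait` stands in for the test's readiness polling.

package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-348956"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("kubectl %v -> %v\n%s", args, err, out)
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")

	// Write through the PVC-backed mount, then recreate the pod.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")

	// The file created by the first pod should still be listed here.
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}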

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.84s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh -n functional-348956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 cp functional-348956:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3059972739/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh -n functional-348956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh -n functional-348956 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.84s)
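CpCmd pairs each `minikube cp` with an `ssh sudo cat` to confirm the copied file's contents inside the node. A compact sketch of one such pair, assuming it is run from a directory containing the same `testdata/cp-test.txt` the suite uses:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Copy a local file into the node, then read it back over ssh.
	steps := [][]string{
		{"cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt"},
		{"ssh", "-n", "functional-348956", "sudo cat /home/docker/cp-test.txt"},
	}
	for _, args := range steps {
		full := append([]string{"-p", "functional-348956"}, args...)
		out, err := exec.Command("minikube", full...).CombinedOutput()
		fmt.Printf("minikube %v -> %v\n%s", args, err, out)
	}
}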

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/817378/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "sudo cat /etc/test/nested/copy/817378/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/817378.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "sudo cat /etc/ssl/certs/817378.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/817378.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "sudo cat /usr/share/ca-certificates/817378.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/8173782.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "sudo cat /etc/ssl/certs/8173782.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/8173782.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "sudo cat /usr/share/ca-certificates/8173782.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.42s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-348956 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-348956 ssh "sudo systemctl is-active docker": exit status 1 (560.193202ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-348956 ssh "sudo systemctl is-active containerd": exit status 1 (431.194133ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.99s)
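With crio selected as the container runtime, the docker and containerd units inside the node are expected to be inactive, so `systemctl is-active` prints "inactive" and returns non-zero; that is why the two Non-zero exits above still add up to a pass. A sketch of the same probe via `minikube ssh`, assuming `minikube` on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		cmd := exec.Command("minikube", "-p", "functional-348956", "ssh",
			fmt.Sprintf("sudo systemctl is-active %s", unit))
		out, err := cmd.CombinedOutput()
		// With crio active, both units report "inactive"; systemctl exits
		// with status 3 inside the node and minikube ssh surfaces exit 1.
		fmt.Printf("%s: %q (err: %v)\n", unit, strings.TrimSpace(string(out)), err)
	}
}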

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-348956 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-348956 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-348956 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 840187: os: process already finished
helpers_test.go:502: unable to terminate pid 840015: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-348956 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-348956 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-348956 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a5ad9256-ef6e-4c33-80be-6d4711afc717] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a5ad9256-ef6e-4c33-80be-6d4711afc717] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004239212s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.55s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-348956 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.20.207 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-348956 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
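The TunnelCmd serial steps above amount to: start `minikube tunnel` in the background, wait for the nginx-svc LoadBalancer to be assigned an ingress IP, reach that IP directly from the host, then stop the tunnel. A Go sketch of that lifecycle; the jsonpath query is the one from the log, the polling loop and timeouts are assumptions, and elevated privileges may be needed for the routes the tunnel installs.

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Background tunnel, like the test's daemon step.
	tunnel := exec.Command("minikube", "-p", "functional-348956", "tunnel", "--alsologtostderr")
	if err := tunnel.Start(); err != nil {
		panic(err)
	}
	defer tunnel.Process.Kill() // DeleteTunnel equivalent

	// Poll for the LoadBalancer ingress IP on nginx-svc (from testdata/testsvc.yaml).
	var ip string
	for i := 0; i < 30 && ip == ""; i++ {
		out, _ := exec.Command("kubectl", "--context", "functional-348956", "get", "svc",
			"nginx-svc", "-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		ip = strings.TrimSpace(string(out))
		if ip == "" {
			time.Sleep(2 * time.Second)
		}
	}

	// With the tunnel up, the cluster-internal LoadBalancer IP is reachable from the host.
	resp, err := http.Get("http://" + ip)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println("tunnel reachable at", ip, "status", resp.StatusCode)
}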

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-348956 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-348956 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-6hdn7" [c04d1b65-d18f-4df3-aa75-91099fa2ac83] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-6hdn7" [c04d1b65-d18f-4df3-aa75-91099fa2ac83] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005909964s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "375.035466ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "70.225071ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "364.24466ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "79.482172ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-348956 /tmp/TestFunctionalparallelMountCmdany-port220403257/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702942929409897285" to /tmp/TestFunctionalparallelMountCmdany-port220403257/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702942929409897285" to /tmp/TestFunctionalparallelMountCmdany-port220403257/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702942929409897285" to /tmp/TestFunctionalparallelMountCmdany-port220403257/001/test-1702942929409897285
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-348956 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (393.863566ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 18 23:42 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 18 23:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 18 23:42 test-1702942929409897285
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh cat /mount-9p/test-1702942929409897285
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-348956 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [dee99d36-1e3f-484c-9cd5-9668cdd8d73c] Pending
helpers_test.go:344: "busybox-mount" [dee99d36-1e3f-484c-9cd5-9668cdd8d73c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [dee99d36-1e3f-484c-9cd5-9668cdd8d73c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [dee99d36-1e3f-484c-9cd5-9668cdd8d73c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.006544857s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-348956 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-348956 /tmp/TestFunctionalparallelMountCmdany-port220403257/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.01s)
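MountCmd/any-port starts `minikube mount <host-dir>:/mount-9p` as a background process, retries `findmnt` until the 9p mount appears (the first Non-zero exit above is just that race), and then checks that host files are visible inside the node. A sketch of the same flow with an assumed temporary host directory:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	hostDir, err := os.MkdirTemp("", "mount-demo")
	if err != nil {
		panic(err)
	}
	os.WriteFile(hostDir+"/created-by-test", []byte("hello"), 0o644)

	// Background 9p mount of the host directory at /mount-9p inside the node.
	mount := exec.Command("minikube", "mount", "-p", "functional-348956", hostDir+":/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// The first findmnt in the log fails because the mount is not up yet; retry.
	for i := 0; i < 10; i++ {
		if out, err := exec.Command("minikube", "-p", "functional-348956", "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput(); err == nil {
			fmt.Print(string(out))
			break
		}
		time.Sleep(time.Second)
	}

	// Files written on the host side are visible inside the node.
	out, _ := exec.Command("minikube", "-p", "functional-348956", "ssh",
		"ls -la /mount-9p").CombinedOutput()
	fmt.Print(string(out))
}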

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 service list -o json
functional_test.go:1493: Took "640.586694ms" to run "out/minikube-linux-arm64 -p functional-348956 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30199
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30199
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-348956 /tmp/TestFunctionalparallelMountCmdspecific-port1023366218/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-348956 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (638.815935ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-348956 /tmp/TestFunctionalparallelMountCmdspecific-port1023366218/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
E1218 23:42:19.549148  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-348956 ssh "sudo umount -f /mount-9p": exit status 1 (354.608946ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-348956 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-348956 /tmp/TestFunctionalparallelMountCmdspecific-port1023366218/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.61s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-348956 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3620426584/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-348956 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3620426584/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-348956 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3620426584/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-348956 ssh "findmnt -T" /mount1: (1.269072149s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-348956 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-348956 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3620426584/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-348956 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3620426584/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-348956 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3620426584/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.21s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-arm64 -p functional-348956 version -o=json --components: (1.059034539s)
--- PASS: TestFunctional/parallel/Version/components (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-348956 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-348956
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-348956 image ls --format short --alsologtostderr:
I1218 23:42:45.276924  844585 out.go:296] Setting OutFile to fd 1 ...
I1218 23:42:45.277283  844585 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:42:45.277316  844585 out.go:309] Setting ErrFile to fd 2...
I1218 23:42:45.277336  844585 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:42:45.277726  844585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
I1218 23:42:45.279289  844585 config.go:182] Loaded profile config "functional-348956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1218 23:42:45.279578  844585 config.go:182] Loaded profile config "functional-348956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1218 23:42:45.280191  844585 cli_runner.go:164] Run: docker container inspect functional-348956 --format={{.State.Status}}
I1218 23:42:45.307852  844585 ssh_runner.go:195] Run: systemctl --version
I1218 23:42:45.307927  844585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-348956
I1218 23:42:45.341883  844585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/functional-348956/id_rsa Username:docker}
I1218 23:42:45.465332  844585 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-348956 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 5628e5ea3c17f | 196MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | alpine             | f09fc93534f6a | 45.3MB |
| gcr.io/google-containers/addon-resizer  | functional-348956  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-348956 image ls --format table --alsologtostderr:
I1218 23:42:45.650661  844644 out.go:296] Setting OutFile to fd 1 ...
I1218 23:42:45.650854  844644 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:42:45.650880  844644 out.go:309] Setting ErrFile to fd 2...
I1218 23:42:45.650901  844644 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:42:45.651233  844644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
I1218 23:42:45.652056  844644 config.go:182] Loaded profile config "functional-348956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1218 23:42:45.652306  844644 config.go:182] Loaded profile config "functional-348956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1218 23:42:45.652956  844644 cli_runner.go:164] Run: docker container inspect functional-348956 --format={{.State.Status}}
I1218 23:42:45.682380  844644 ssh_runner.go:195] Run: systemctl --version
I1218 23:42:45.682433  844644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-348956
I1218 23:42:45.711233  844644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/functional-348956/id_rsa Username:docker}
I1218 23:42:45.823032  844644 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-348956 image ls --format json --alsologtostderr:
[{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags"
:["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-348956"],"size":"34114467"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d
404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"2475
62353"},{"id":"5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab"],"repoTags":["docker.io/library/nginx:latest"],"size":"196211465"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdce
a550c33bedfdc92e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"59253556"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repo
Tags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"f09fc93534f6a80e1cb9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8","repoDigests":["docker.io/library/nginx@sha256:18d2bb20c22e511b92a3ec81f553edfcaeeb74fd1c96a92c56a6c4252c75eec7","docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45281593"},{"id":"72565bf5bbedfb62e9d21af
a2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-348956 image ls --format json --alsologtostderr:
I1218 23:42:45.622170  844639 out.go:296] Setting OutFile to fd 1 ...
I1218 23:42:45.622313  844639 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:42:45.622323  844639 out.go:309] Setting ErrFile to fd 2...
I1218 23:42:45.622329  844639 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:42:45.622583  844639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
I1218 23:42:45.624101  844639 config.go:182] Loaded profile config "functional-348956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1218 23:42:45.624325  844639 config.go:182] Loaded profile config "functional-348956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1218 23:42:45.625011  844639 cli_runner.go:164] Run: docker container inspect functional-348956 --format={{.State.Status}}
I1218 23:42:45.664542  844639 ssh_runner.go:195] Run: systemctl --version
I1218 23:42:45.664597  844639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-348956
I1218 23:42:45.684661  844639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/functional-348956/id_rsa Username:docker}
I1218 23:42:45.793991  844639 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)
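The JSON listing above is an array of image records, each carrying an id, repoDigests, repoTags and a size (a string, in bytes). A minimal decoder matching only the fields visible in this output; the struct name is illustrative, not a minikube type:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields seen in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-348956",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatal(err)
	}
	for _, img := range imgs {
		fmt.Printf("%-13.13s %10s bytes  %v\n", img.ID, img.Size, img.RepoTags)
	}
}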

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-348956 image ls --format yaml --alsologtostderr:
- id: 5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab
repoTags:
- docker.io/library/nginx:latest
size: "196211465"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-348956
size: "34114467"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: f09fc93534f6a80e1cb9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8
repoDigests:
- docker.io/library/nginx@sha256:18d2bb20c22e511b92a3ec81f553edfcaeeb74fd1c96a92c56a6c4252c75eec7
- docker.io/library/nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc
repoTags:
- docker.io/library/nginx:alpine
size: "45281593"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-348956 image ls --format yaml --alsologtostderr:
I1218 23:42:45.247723  844586 out.go:296] Setting OutFile to fd 1 ...
I1218 23:42:45.248015  844586 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:42:45.248027  844586 out.go:309] Setting ErrFile to fd 2...
I1218 23:42:45.248035  844586 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:42:45.248372  844586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
I1218 23:42:45.249240  844586 config.go:182] Loaded profile config "functional-348956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1218 23:42:45.249448  844586 config.go:182] Loaded profile config "functional-348956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1218 23:42:45.250242  844586 cli_runner.go:164] Run: docker container inspect functional-348956 --format={{.State.Status}}
I1218 23:42:45.281643  844586 ssh_runner.go:195] Run: systemctl --version
I1218 23:42:45.281707  844586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-348956
I1218 23:42:45.317183  844586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/functional-348956/id_rsa Username:docker}
I1218 23:42:45.434609  844586 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-348956 ssh pgrep buildkitd: exit status 1 (338.251795ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image build -t localhost/my-image:functional-348956 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-348956 image build -t localhost/my-image:functional-348956 testdata/build --alsologtostderr: (2.179325178s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-348956 image build -t localhost/my-image:functional-348956 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 24c1252a31e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-348956
--> a13a32b7df3
Successfully tagged localhost/my-image:functional-348956
a13a32b7df3be6135680ad759b91c23f46dc1aae4310185f21fc1e69303e12bd
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-348956 image build -t localhost/my-image:functional-348956 testdata/build --alsologtostderr:
I1218 23:42:46.275914  844745 out.go:296] Setting OutFile to fd 1 ...
I1218 23:42:46.276775  844745 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:42:46.276789  844745 out.go:309] Setting ErrFile to fd 2...
I1218 23:42:46.276795  844745 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 23:42:46.277107  844745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
I1218 23:42:46.277869  844745 config.go:182] Loaded profile config "functional-348956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1218 23:42:46.278499  844745 config.go:182] Loaded profile config "functional-348956": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1218 23:42:46.279111  844745 cli_runner.go:164] Run: docker container inspect functional-348956 --format={{.State.Status}}
I1218 23:42:46.297478  844745 ssh_runner.go:195] Run: systemctl --version
I1218 23:42:46.297538  844745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-348956
I1218 23:42:46.321306  844745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/functional-348956/id_rsa Username:docker}
I1218 23:42:46.421949  844745 build_images.go:151] Building image from path: /tmp/build.667134206.tar
I1218 23:42:46.422030  844745 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1218 23:42:46.432669  844745 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.667134206.tar
I1218 23:42:46.437419  844745 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.667134206.tar: stat -c "%s %y" /var/lib/minikube/build/build.667134206.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.667134206.tar': No such file or directory
I1218 23:42:46.437449  844745 ssh_runner.go:362] scp /tmp/build.667134206.tar --> /var/lib/minikube/build/build.667134206.tar (3072 bytes)
I1218 23:42:46.466892  844745 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.667134206
I1218 23:42:46.478009  844745 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.667134206 -xf /var/lib/minikube/build/build.667134206.tar
I1218 23:42:46.489285  844745 crio.go:297] Building image: /var/lib/minikube/build/build.667134206
I1218 23:42:46.489387  844745 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-348956 /var/lib/minikube/build/build.667134206 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1218 23:42:48.353274  844745 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-348956 /var/lib/minikube/build/build.667134206 --cgroup-manager=cgroupfs: (1.863853392s)
I1218 23:42:48.353357  844745 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.667134206
I1218 23:42:48.364086  844745 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.667134206.tar
I1218 23:42:48.374904  844745 build_images.go:207] Built localhost/my-image:functional-348956 from /tmp/build.667134206.tar
I1218 23:42:48.374937  844745 build_images.go:123] succeeded building to: functional-348956
I1218 23:42:48.374943  844745 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.79s)
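The build test ships a tiny context under testdata/build; judging from the steps printed above, it amounts to a Dockerfile that starts FROM gcr.io/k8s-minikube/busybox, runs `true`, and ADDs a content.txt. A sketch that recreates an equivalent context in a temp directory and builds it inside the cluster (the paths, tag and file contents are illustrative):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build-ctx")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	// Minimal context mirroring the build steps shown in the test output.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// minikube copies the context into the node and builds it with the
	// container runtime (podman under cri-o, as the stderr above shows).
	cmd := exec.Command("minikube", "-p", "functional-348956",
		"image", "build", "-t", "localhost/my-image:functional-348956", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}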

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.399359148s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-348956
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image load --daemon gcr.io/google-containers/addon-resizer:functional-348956 --alsologtostderr
2023/12/18 23:42:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-348956 image load --daemon gcr.io/google-containers/addon-resizer:functional-348956 --alsologtostderr: (4.703127898s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.03s)
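`image load --daemon <ref>` copies an image from the host's Docker daemon into the cluster's container runtime (cri-o here), so pods can use it without pulling from a registry. A short sketch of the tag-then-load sequence exercised above, assuming docker and minikube on PATH; the image names are the ones this test uses:

package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	// Tag an image in the host Docker daemon under the profile-specific name...
	run("docker", "tag", "gcr.io/google-containers/addon-resizer:1.8.8",
		"gcr.io/google-containers/addon-resizer:functional-348956")
	// ...then push it into the cluster runtime and confirm it is listed there.
	run("minikube", "-p", "functional-348956", "image", "load", "--daemon",
		"gcr.io/google-containers/addon-resizer:functional-348956")
	run("minikube", "-p", "functional-348956", "image", "ls")
}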

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
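`update-context` rewrites the kubeconfig entry for the profile so its API server address matches the machine's current IP and port; the three variants above only differ in the kubeconfig state they start from. A small check, assuming kubectl is also installed and that the kubeconfig cluster entry is named after the profile (the usual minikube convention):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Refresh the kubeconfig entry for the profile.
	if out, err := exec.Command("minikube", "-p", "functional-348956",
		"update-context", "--alsologtostderr", "-v=2").CombinedOutput(); err != nil {
		log.Fatalf("update-context: %v\n%s", err, out)
	}
	// Ask kubectl where that cluster entry now points.
	out, err := exec.Command("kubectl", "config", "view", "-o",
		"jsonpath={.clusters[?(@.name==\"functional-348956\")].cluster.server}").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("API server for functional-348956: %s\n", out)
}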

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image load --daemon gcr.io/google-containers/addon-resizer:functional-348956 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-348956 image load --daemon gcr.io/google-containers/addon-resizer:functional-348956 --alsologtostderr: (3.132191479s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.421970996s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-348956
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image load --daemon gcr.io/google-containers/addon-resizer:functional-348956 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-348956 image load --daemon gcr.io/google-containers/addon-resizer:functional-348956 --alsologtostderr: (3.664622038s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.39s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image save gcr.io/google-containers/addon-resizer:functional-348956 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-348956 image save gcr.io/google-containers/addon-resizer:functional-348956 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.005009585s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.01s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image rm gcr.io/google-containers/addon-resizer:functional-348956 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-348956 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.044108211s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.33s)
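Together with ImageSaveToFile and ImageRemove above, this exercises the offline round trip: export an image from the cluster runtime to a tarball, remove it, then import it back from the file. A compact sketch of that round trip; the tarball path is illustrative:

package main

import (
	"log"
	"os/exec"
)

func must(cmd *exec.Cmd) {
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%v: %v\n%s", cmd.Args, err, out)
	}
}

func main() {
	tar := "/tmp/addon-resizer-save.tar" // illustrative path
	ref := "gcr.io/google-containers/addon-resizer:functional-348956"

	// Export the image from the cluster runtime to a host-side tarball...
	must(exec.Command("minikube", "-p", "functional-348956", "image", "save", ref, tar))
	// ...remove it from the runtime, then re-import it from the tarball.
	must(exec.Command("minikube", "-p", "functional-348956", "image", "rm", ref))
	must(exec.Command("minikube", "-p", "functional-348956", "image", "load", tar))
	must(exec.Command("minikube", "-p", "functional-348956", "image", "ls"))
}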

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-348956
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-348956 image save --daemon gcr.io/google-containers/addon-resizer:functional-348956 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-348956
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.03s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-348956
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-348956
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-348956
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (97.1s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-715187 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-715187 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m37.10049889s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (97.10s)
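This suite pins the cluster to Kubernetes v1.18.20 and waits for it to come up before the addon steps run. A sketch of the same start invocation from Go, reusing the flags shown above (profile name kept from the log, minikube assumed on PATH); the follow-up addon enable mirrors the next serial test:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start",
		"-p", "ingress-addon-legacy-715187",
		"--kubernetes-version=v1.18.20",
		"--memory=4096",
		"--wait=true",
		"--driver=docker",
		"--container-runtime=crio")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("start failed: %v", err)
	}
	// The next serial steps enable the ingress and ingress-dns addons on it.
	if err := exec.Command("minikube", "-p", "ingress-addon-legacy-715187",
		"addons", "enable", "ingress").Run(); err != nil {
		log.Fatalf("enable ingress: %v", err)
	}
}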

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.44s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-715187 addons enable ingress --alsologtostderr -v=5
E1218 23:44:35.701643  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-715187 addons enable ingress --alsologtostderr -v=5: (10.436515616s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.44s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.7s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-715187 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.70s)

                                                
                                    
x
+
TestJSONOutput/start/Command (49.76s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-672176 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1218 23:48:03.493923  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-672176 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (49.752715684s)
--- PASS: TestJSONOutput/start/Command (49.76s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.85s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-672176 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.85s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-672176 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.76s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.91s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-672176 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-672176 --output=json --user=testUser: (5.912820288s)
--- PASS: TestJSONOutput/stop/Command (5.91s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.35s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-650588 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-650588 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (109.149538ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8a77d909-efed-4570-ae73-2a0e3d7bc0e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-650588] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"01ad5de8-c069-4c5d-9ad5-1255427dc061","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17822"}}
	{"specversion":"1.0","id":"700e13df-e97e-4d4a-ad2c-cf4e4c25bdcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cd5bf174-5e4e-4ebc-9231-f4e5b12151a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig"}}
	{"specversion":"1.0","id":"bfe1cd31-dd3d-4ff6-9ba0-8344130e4c8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube"}}
	{"specversion":"1.0","id":"fe0cc715-7878-4d72-a6df-f61c34737e55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e6b09f74-feba-4100-80ff-323b6affe5bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"02c5b8aa-e66d-4791-9e95-7b13c582d042","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-650588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-650588
--- PASS: TestErrorJSONOutput (0.35s)
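With --output=json, every progress step, info line and error is emitted as a CloudEvents-style JSON object on its own line, exactly as in the stdout above; the error event carries an exitcode and a name such as DRV_UNSUPPORTED_OS. A small decoder for that stream using only the fields visible in this report (profile name and flags reused from the test; all data values arrive as strings):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// event mirrors the CloudEvents-style lines printed with --output=json.
type event struct {
	Type string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
	Data map[string]string `json:"data"` // message, currentstep, exitcode, name, ...
}

func main() {
	cmd := exec.Command("minikube", "start", "-p", "json-output-error-650588",
		"--memory=2200", "--output=json", "--wait=true", "--driver=fail")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip any non-JSON noise
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
	_ = cmd.Wait() // exits non-zero (status 56 above) for the unsupported driver
}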

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (47.78s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-385537 --network=
E1218 23:49:25.415568  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-385537 --network=: (45.611605576s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-385537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-385537
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-385537: (2.138727438s)
--- PASS: TestKicCustomNetwork/create_custom_network (47.78s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (37.38s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-916358 --network=bridge
E1218 23:49:35.701682  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:49:39.921135  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
E1218 23:49:39.926329  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
E1218 23:49:39.937248  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
E1218 23:49:39.957517  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
E1218 23:49:39.997791  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
E1218 23:49:40.078154  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
E1218 23:49:40.238692  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
E1218 23:49:40.559296  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
E1218 23:49:41.200337  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
E1218 23:49:42.481312  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
E1218 23:49:45.042821  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
E1218 23:49:50.163051  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
E1218 23:50:00.403314  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-916358 --network=bridge: (35.336932225s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-916358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-916358
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-916358: (2.01388478s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.38s)

                                                
                                    
TestKicExistingNetwork (35.36s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-425437 --network=existing-network
E1218 23:50:20.883775  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-425437 --network=existing-network: (33.072016577s)
helpers_test.go:175: Cleaning up "existing-network-425437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-425437
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-425437: (2.128777515s)
--- PASS: TestKicExistingNetwork (35.36s)

                                                
                                    
TestKicCustomSubnet (36.21s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-541592 --subnet=192.168.60.0/24
E1218 23:51:01.844043  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-541592 --subnet=192.168.60.0/24: (34.03876368s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-541592 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-541592" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-541592
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-541592: (2.14882522s)
--- PASS: TestKicCustomSubnet (36.21s)
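
The subnet check above boils down to comparing the requested CIDR with what `docker network inspect` reports for the network's IPAM config. Here is a minimal sketch of that comparison, assuming Docker is on PATH and reusing the network name and subnet from this run (both are placeholders once the profile is deleted); it is not minikube's helper code.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Values taken from the test run above; the network only exists while
	// the profile does, so treat these as illustrative.
	const network = "custom-subnet-541592"
	const want = "192.168.60.0/24"

	// Same format string the test passes to `docker network inspect`.
	out, err := exec.Command("docker", "network", "inspect", network,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}

	got := strings.TrimSpace(string(out))
	fmt.Printf("subnet %q matches requested %q: %v\n", got, want, got == want)
}
```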

                                                
                                    
TestKicStaticIP (32.09s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-996049 --static-ip=192.168.200.200
E1218 23:51:41.572824  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-996049 --static-ip=192.168.200.200: (29.749144295s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-996049 ip
helpers_test.go:175: Cleaning up "static-ip-996049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-996049
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-996049: (2.161522455s)
--- PASS: TestKicStaticIP (32.09s)

                                                
                                    
TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (69.56s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-337084 --driver=docker  --container-runtime=crio
E1218 23:52:09.255819  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1218 23:52:23.764335  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-337084 --driver=docker  --container-runtime=crio: (32.189699925s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-339636 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-339636 --driver=docker  --container-runtime=crio: (31.620944011s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-337084
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-339636
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-339636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-339636
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-339636: (2.053962257s)
helpers_test.go:175: Cleaning up "first-337084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-337084
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-337084: (2.323712981s)
--- PASS: TestMinikubeProfile (69.56s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.58s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-528112 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-528112 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.584547632s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.58s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.32s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-528112 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.6s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-530270 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-530270 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.599621517s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.60s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-530270 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-528112 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-528112 --alsologtostderr -v=5: (1.674505349s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-530270 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-530270
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-530270: (1.220793331s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.07s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-530270
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-530270: (7.073385395s)
--- PASS: TestMountStart/serial/RestartStopped (8.07s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-530270 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (95.08s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-320272 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1218 23:54:35.701419  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:54:39.919932  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-320272 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m34.480695388s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 status --alsologtostderr
E1218 23:55:07.605252  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/FreshStart2Nodes (95.08s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.93s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-320272 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-320272 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-320272 -- rollout status deployment/busybox: (2.750002012s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-320272 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-320272 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-320272 -- exec busybox-5bc68d56bd-9rw5h -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-320272 -- exec busybox-5bc68d56bd-tdcv5 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-320272 -- exec busybox-5bc68d56bd-9rw5h -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-320272 -- exec busybox-5bc68d56bd-tdcv5 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-320272 -- exec busybox-5bc68d56bd-9rw5h -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-320272 -- exec busybox-5bc68d56bd-tdcv5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.93s)

                                                
                                    
TestMultiNode/serial/AddNode (23.74s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-320272 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-320272 -v 3 --alsologtostderr: (22.958168485s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.74s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-320272 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.75s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 cp testdata/cp-test.txt multinode-320272:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 cp multinode-320272:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile630891656/001/cp-test_multinode-320272.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 cp multinode-320272:/home/docker/cp-test.txt multinode-320272-m02:/home/docker/cp-test_multinode-320272_multinode-320272-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272-m02 "sudo cat /home/docker/cp-test_multinode-320272_multinode-320272-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 cp multinode-320272:/home/docker/cp-test.txt multinode-320272-m03:/home/docker/cp-test_multinode-320272_multinode-320272-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272-m03 "sudo cat /home/docker/cp-test_multinode-320272_multinode-320272-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 cp testdata/cp-test.txt multinode-320272-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 cp multinode-320272-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile630891656/001/cp-test_multinode-320272-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 cp multinode-320272-m02:/home/docker/cp-test.txt multinode-320272:/home/docker/cp-test_multinode-320272-m02_multinode-320272.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272 "sudo cat /home/docker/cp-test_multinode-320272-m02_multinode-320272.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 cp multinode-320272-m02:/home/docker/cp-test.txt multinode-320272-m03:/home/docker/cp-test_multinode-320272-m02_multinode-320272-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272-m03 "sudo cat /home/docker/cp-test_multinode-320272-m02_multinode-320272-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 cp testdata/cp-test.txt multinode-320272-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 cp multinode-320272-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile630891656/001/cp-test_multinode-320272-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 cp multinode-320272-m03:/home/docker/cp-test.txt multinode-320272:/home/docker/cp-test_multinode-320272-m03_multinode-320272.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272 "sudo cat /home/docker/cp-test_multinode-320272-m03_multinode-320272.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 cp multinode-320272-m03:/home/docker/cp-test.txt multinode-320272-m02:/home/docker/cp-test_multinode-320272-m03_multinode-320272-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 ssh -n multinode-320272-m02 "sudo cat /home/docker/cp-test_multinode-320272-m03_multinode-320272-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.75s)

                                                
                                    
TestMultiNode/serial/StopNode (2.45s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-320272 node stop m03: (1.254532937s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-320272 status: exit status 7 (592.049902ms)

                                                
                                                
-- stdout --
	multinode-320272
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-320272-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-320272-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-320272 status --alsologtostderr: exit status 7 (602.909812ms)

                                                
                                                
-- stdout --
	multinode-320272
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-320272-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-320272-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1218 23:55:54.812980  891122 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:55:54.813267  891122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:55:54.813295  891122 out.go:309] Setting ErrFile to fd 2...
	I1218 23:55:54.813315  891122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:55:54.813637  891122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
	I1218 23:55:54.814051  891122 out.go:303] Setting JSON to false
	I1218 23:55:54.814216  891122 notify.go:220] Checking for updates...
	I1218 23:55:54.814988  891122 mustload.go:65] Loading cluster: multinode-320272
	I1218 23:55:54.815510  891122 config.go:182] Loaded profile config "multinode-320272": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1218 23:55:54.815551  891122 status.go:255] checking status of multinode-320272 ...
	I1218 23:55:54.816188  891122 cli_runner.go:164] Run: docker container inspect multinode-320272 --format={{.State.Status}}
	I1218 23:55:54.839053  891122 status.go:330] multinode-320272 host status = "Running" (err=<nil>)
	I1218 23:55:54.839095  891122 host.go:66] Checking if "multinode-320272" exists ...
	I1218 23:55:54.839398  891122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-320272
	I1218 23:55:54.865752  891122 host.go:66] Checking if "multinode-320272" exists ...
	I1218 23:55:54.866094  891122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 23:55:54.866145  891122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272
	I1218 23:55:54.893947  891122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33516 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272/id_rsa Username:docker}
	I1218 23:55:54.998501  891122 ssh_runner.go:195] Run: systemctl --version
	I1218 23:55:55.010782  891122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:55:55.026639  891122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 23:55:55.103058  891122 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:55 SystemTime:2023-12-18 23:55:55.091171532 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 23:55:55.103711  891122 kubeconfig.go:92] found "multinode-320272" server: "https://192.168.58.2:8443"
	I1218 23:55:55.103729  891122 api_server.go:166] Checking apiserver status ...
	I1218 23:55:55.103774  891122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 23:55:55.118160  891122 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1227/cgroup
	I1218 23:55:55.131401  891122 api_server.go:182] apiserver freezer: "13:freezer:/docker/71070d7623c10070af469b551f3bbb0dab5f1c71cb21c117a4669e81f8474851/crio/crio-23b1a0fd5ac602c08d2664dd79431850c04ed2bec82b5159b5c8f87489bd4516"
	I1218 23:55:55.131476  891122 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/71070d7623c10070af469b551f3bbb0dab5f1c71cb21c117a4669e81f8474851/crio/crio-23b1a0fd5ac602c08d2664dd79431850c04ed2bec82b5159b5c8f87489bd4516/freezer.state
	I1218 23:55:55.142628  891122 api_server.go:204] freezer state: "THAWED"
	I1218 23:55:55.142656  891122 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1218 23:55:55.151659  891122 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1218 23:55:55.151690  891122 status.go:421] multinode-320272 apiserver status = Running (err=<nil>)
	I1218 23:55:55.151703  891122 status.go:257] multinode-320272 status: &{Name:multinode-320272 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 23:55:55.151726  891122 status.go:255] checking status of multinode-320272-m02 ...
	I1218 23:55:55.152310  891122 cli_runner.go:164] Run: docker container inspect multinode-320272-m02 --format={{.State.Status}}
	I1218 23:55:55.171488  891122 status.go:330] multinode-320272-m02 host status = "Running" (err=<nil>)
	I1218 23:55:55.171516  891122 host.go:66] Checking if "multinode-320272-m02" exists ...
	I1218 23:55:55.171853  891122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-320272-m02
	I1218 23:55:55.191606  891122 host.go:66] Checking if "multinode-320272-m02" exists ...
	I1218 23:55:55.191910  891122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 23:55:55.191990  891122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-320272-m02
	I1218 23:55:55.209636  891122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33521 SSHKeyPath:/home/jenkins/minikube-integration/17822-812008/.minikube/machines/multinode-320272-m02/id_rsa Username:docker}
	I1218 23:55:55.310269  891122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 23:55:55.323875  891122 status.go:257] multinode-320272-m02 status: &{Name:multinode-320272-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1218 23:55:55.323908  891122 status.go:255] checking status of multinode-320272-m03 ...
	I1218 23:55:55.324241  891122 cli_runner.go:164] Run: docker container inspect multinode-320272-m03 --format={{.State.Status}}
	I1218 23:55:55.343631  891122 status.go:330] multinode-320272-m03 host status = "Stopped" (err=<nil>)
	I1218 23:55:55.343655  891122 status.go:343] host is not running, skipping remaining checks
	I1218 23:55:55.343663  891122 status.go:257] multinode-320272-m03 status: &{Name:multinode-320272-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
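
As the two status calls above show, `minikube status` exits with code 7 rather than 0 when any node is stopped, while still printing the per-node breakdown on stdout. The following is a hedged sketch of how a caller could distinguish that from a hard failure; it is assumed wrapper code, not part of the test suite, and it reuses the binary path and profile name from this run.

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same binary and profile the test drives above.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-320272", "status")
	out, err := cmd.Output()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
		fmt.Print(string(out))
	case errors.As(err, &exitErr):
		// Exit code 7 is what the runs above returned while a host was stopped;
		// the per-node status is still available on stdout.
		fmt.Printf("status exited with code %d\n%s", exitErr.ExitCode(), out)
	default:
		panic(err) // the binary could not be started at all
	}
}
```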

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.6s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 node start m03 --alsologtostderr
E1218 23:55:58.752311  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-320272 node start m03 --alsologtostderr: (11.756699208s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.60s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (122.3s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-320272
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-320272
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-320272: (24.914810355s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-320272 --wait=true -v=8 --alsologtostderr
E1218 23:56:41.573090  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-320272 --wait=true -v=8 --alsologtostderr: (1m37.207094005s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-320272
--- PASS: TestMultiNode/serial/RestartKeepsNodes (122.30s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.22s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-320272 node delete m03: (4.413468455s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)
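
The `kubectl get nodes -o go-template=...` call above walks every node's conditions and prints the status of the Ready condition. Since Go's text/template implements the same syntax, here is a small illustrative program that runs that exact template over hand-written sample data; only the template string comes from the log, the node data below is made up for the example.

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// Template string taken verbatim from the kubectl invocation above
	// (without the surrounding single quotes).
	const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Hand-written sample shaped like `kubectl get nodes -o json` output,
	// limited to the fields the template touches.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
				map[string]any{"type": "MemoryPressure", "status": "False"},
			}}},
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}

	// Prints " True" once per node, matching the test's expectation.
	t := template.Must(template.New("ready").Parse(tpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}
```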

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.99s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-320272 stop: (23.779970704s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-320272 status: exit status 7 (103.584074ms)

                                                
                                                
-- stdout --
	multinode-320272
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-320272-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-320272 status --alsologtostderr: exit status 7 (106.885366ms)

                                                
                                                
-- stdout --
	multinode-320272
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-320272-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1218 23:58:39.433499  899281 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:58:39.433687  899281 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:58:39.433717  899281 out.go:309] Setting ErrFile to fd 2...
	I1218 23:58:39.433737  899281 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:58:39.433997  899281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
	I1218 23:58:39.434196  899281 out.go:303] Setting JSON to false
	I1218 23:58:39.434299  899281 mustload.go:65] Loading cluster: multinode-320272
	I1218 23:58:39.434373  899281 notify.go:220] Checking for updates...
	I1218 23:58:39.434748  899281 config.go:182] Loaded profile config "multinode-320272": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1218 23:58:39.434783  899281 status.go:255] checking status of multinode-320272 ...
	I1218 23:58:39.435589  899281 cli_runner.go:164] Run: docker container inspect multinode-320272 --format={{.State.Status}}
	I1218 23:58:39.454055  899281 status.go:330] multinode-320272 host status = "Stopped" (err=<nil>)
	I1218 23:58:39.454094  899281 status.go:343] host is not running, skipping remaining checks
	I1218 23:58:39.454102  899281 status.go:257] multinode-320272 status: &{Name:multinode-320272 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 23:58:39.454126  899281 status.go:255] checking status of multinode-320272-m02 ...
	I1218 23:58:39.454421  899281 cli_runner.go:164] Run: docker container inspect multinode-320272-m02 --format={{.State.Status}}
	I1218 23:58:39.472983  899281 status.go:330] multinode-320272-m02 host status = "Stopped" (err=<nil>)
	I1218 23:58:39.473014  899281 status.go:343] host is not running, skipping remaining checks
	I1218 23:58:39.473022  899281 status.go:257] multinode-320272-m02 status: &{Name:multinode-320272-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.99s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (83.01s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-320272 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1218 23:59:35.701611  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1218 23:59:39.920292  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-320272 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m22.015815124s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-320272 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (83.01s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.04s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-320272
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-320272-m02 --driver=docker  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-320272-m02 --driver=docker  --container-runtime=crio: exit status 14 (108.226023ms)

                                                
                                                
-- stdout --
	* [multinode-320272-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-320272-m02' is duplicated with machine name 'multinode-320272-m02' in profile 'multinode-320272'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-320272-m03 --driver=docker  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-320272-m03 --driver=docker  --container-runtime=crio: (34.405113674s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-320272
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-320272: exit status 80 (401.42562ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-320272
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-320272-m03 already exists in multinode-320272-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-320272-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-320272-m03: (2.053753891s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.04s)

                                                
                                    
TestPreload (172.86s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-559500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1219 00:01:41.572740  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-559500 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m24.439120261s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-559500 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-559500 image pull gcr.io/k8s-minikube/busybox: (1.940709459s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-559500
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-559500: (5.832827935s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-559500 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1219 00:03:04.616771  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-559500 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m18.010187855s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-559500 image list
helpers_test.go:175: Cleaning up "test-preload-559500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-559500
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-559500: (2.361571305s)
--- PASS: TestPreload (172.86s)

                                                
                                    
TestScheduledStopUnix (110.49s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-984626 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-984626 --memory=2048 --driver=docker  --container-runtime=crio: (33.602323494s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-984626 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-984626 -n scheduled-stop-984626
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-984626 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-984626 --cancel-scheduled
E1219 00:04:35.702087  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-984626 -n scheduled-stop-984626
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-984626
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-984626 --schedule 15s
E1219 00:04:39.920280  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-984626
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-984626: exit status 7 (86.210834ms)

                                                
                                                
-- stdout --
	scheduled-stop-984626
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-984626 -n scheduled-stop-984626
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-984626 -n scheduled-stop-984626: exit status 7 (85.655212ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-984626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-984626
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-984626: (5.012195391s)
--- PASS: TestScheduledStopUnix (110.49s)

                                                
                                    
TestInsufficientStorage (11.46s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-586178 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-586178 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.841185597s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a478b3e4-6200-4dbd-82d2-6111bb27658e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-586178] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a33b00b9-9a2f-4136-9a22-02e2578c6b48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17822"}}
	{"specversion":"1.0","id":"d3f40356-fbc5-4dd3-8bec-2538602af68e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"75b55420-21f5-47b1-b3d8-d96b9f1280bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig"}}
	{"specversion":"1.0","id":"606229aa-145e-4482-9896-cea42b5ec887","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube"}}
	{"specversion":"1.0","id":"d64098f8-1f4b-4652-a798-6314e9f88ce0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"043d9ca9-c33c-420e-a675-5998c92fdadb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"173aea30-a380-4ff5-b0f4-432275383bad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b361dcb1-e60d-48dc-85fe-01a46189c8da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e2d3269b-722a-41e0-ada3-8496212669bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6bfc027a-75e3-4132-9599-a2dd5d9ca685","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ec27a056-89ef-4f20-91be-8db75ec8318a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-586178 in cluster insufficient-storage-586178","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d5f0d843-5ff5-440d-b282-88b9ca14dc07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1702920864-17822 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd4417d4-7f36-40f7-8ac8-a963c099330b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d8818759-fc6a-4631-a30a-030ec5ff862d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-586178 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-586178 --output=json --layout=cluster: exit status 7 (336.189648ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-586178","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-586178","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1219 00:05:38.690308  915814 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-586178" does not appear in /home/jenkins/minikube-integration/17822-812008/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-586178 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-586178 --output=json --layout=cluster: exit status 7 (332.102903ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-586178","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-586178","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1219 00:05:39.022653  915869 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-586178" does not appear in /home/jenkins/minikube-integration/17822-812008/kubeconfig
	E1219 00:05:39.035553  915869 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/insufficient-storage-586178/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-586178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-586178
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-586178: (1.945645372s)
--- PASS: TestInsufficientStorage (11.46s)
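
For reference: with --output=json, minikube emits one CloudEvents-style JSON object per line (the specversion/type/data records in the stdout block above). A minimal sketch that reads such a stream and reports step progress and errors; it models only the fields visible in this log, so the real schema may carry more:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models only the fields visible in the log above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g. pipe the output of "minikube start ... --output=json" into this program.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // ignore any non-JSON lines
		}
		switch e.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", e.Data["currentstep"], e.Data["totalsteps"], e.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		default:
			fmt.Println(e.Data["message"])
		}
	}
}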

                                                
                                    
x
+
TestKubernetesUpgrade (151.4s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-809520 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-809520 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.540048652s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-809520
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-809520: (2.334227616s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-809520 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-809520 status --format={{.Host}}: exit status 7 (86.85408ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-809520 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-809520 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.835368974s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-809520 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-809520 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-809520 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (225.611639ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-809520] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-809520
	    minikube start -p kubernetes-upgrade-809520 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8095202 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-809520 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-809520 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1219 00:09:35.701295  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1219 00:09:39.919867  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-809520 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.548188353s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-809520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-809520
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-809520: (2.714146992s)
--- PASS: TestKubernetesUpgrade (151.40s)
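
For reference: the sequence above is start at v1.16.0, stop, start again at v1.29.0-rc.2, then confirm that asking for the older version on the existing cluster is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) before restarting at the newer version. A rough sketch of the same flow with os/exec, assuming the binary path and profile name shown in the log and ignoring most error handling:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run invokes minikube with the given arguments and returns its exit code.
func run(args ...string) int {
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1
	}
	return 0
}

func main() {
	p := "kubernetes-upgrade-809520"
	common := []string{"--memory=2200", "--driver=docker", "--container-runtime=crio"}

	run(append([]string{"start", "-p", p, "--kubernetes-version=v1.16.0"}, common...)...)
	run("stop", "-p", p)
	run(append([]string{"start", "-p", p, "--kubernetes-version=v1.29.0-rc.2"}, common...)...)

	// Downgrading an existing cluster is rejected; the log above shows exit status 106.
	if code := run(append([]string{"start", "-p", p, "--kubernetes-version=v1.16.0"}, common...)...); code == 106 {
		fmt.Println("downgrade correctly refused (K8S_DOWNGRADE_UNSUPPORTED)")
	}

	// A restart pinned to the current version still succeeds after the failed downgrade.
	run(append([]string{"start", "-p", p, "--kubernetes-version=v1.29.0-rc.2"}, common...)...)
}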

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-879025 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-879025 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (91.856514ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-879025] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (44.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-879025 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-879025 --driver=docker  --container-runtime=crio: (43.760615086s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-879025 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (30.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-879025 --no-kubernetes --driver=docker  --container-runtime=crio
E1219 00:06:41.572926  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-879025 --no-kubernetes --driver=docker  --container-runtime=crio: (27.325455234s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-879025 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-879025 status -o json: exit status 2 (387.852548ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-879025","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-879025
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-879025: (2.292751979s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (10.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-879025 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-879025 --no-kubernetes --driver=docker  --container-runtime=crio: (10.202778652s)
--- PASS: TestNoKubernetes/serial/Start (10.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-879025 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-879025 "sudo systemctl is-active --quiet service kubelet": exit status 1 (422.694435ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)
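
For reference: the probe above runs systemctl inside the node over "minikube ssh"; for a --no-kubernetes profile the kubelet unit is expected to be inactive, so the command exits non-zero (status 3 in the stderr block above). A small sketch of the same check, assuming the binary and profile names from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask systemd inside the minikube node whether the kubelet unit is active.
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-879025",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		// A non-zero exit means the unit is not active, which is the expected
		// outcome for a profile started with --no-kubernetes.
		fmt.Println("kubelet is not running:", err)
		return
	}
	fmt.Println("kubelet is unexpectedly active")
}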

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-879025
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-879025: (1.301081612s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-879025 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-879025 --driver=docker  --container-runtime=crio: (8.053301796s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-879025 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-879025 "sudo systemctl is-active --quiet service kubelet": exit status 1 (415.251234ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.18s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-345510
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)

                                                
                                    
x
+
TestPause/serial/Start (61.53s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-719849 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-719849 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m1.526832115s)
--- PASS: TestPause/serial/Start (61.53s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (27.86s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-719849 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1219 00:11:41.572161  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-719849 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.838527795s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-468021 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-468021 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (219.291146ms)

                                                
                                                
-- stdout --
	* [false-468021] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 00:11:54.323242  949384 out.go:296] Setting OutFile to fd 1 ...
	I1219 00:11:54.323365  949384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 00:11:54.323375  949384 out.go:309] Setting ErrFile to fd 2...
	I1219 00:11:54.323381  949384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 00:11:54.323643  949384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-812008/.minikube/bin
	I1219 00:11:54.324131  949384 out.go:303] Setting JSON to false
	I1219 00:11:54.325192  949384 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17657,"bootTime":1702927058,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1219 00:11:54.325277  949384 start.go:138] virtualization:  
	I1219 00:11:54.327839  949384 out.go:177] * [false-468021] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1219 00:11:54.329831  949384 out.go:177]   - MINIKUBE_LOCATION=17822
	I1219 00:11:54.331596  949384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 00:11:54.329916  949384 notify.go:220] Checking for updates...
	I1219 00:11:54.334895  949384 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-812008/kubeconfig
	I1219 00:11:54.336638  949384 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-812008/.minikube
	I1219 00:11:54.338481  949384 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1219 00:11:54.340341  949384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 00:11:54.342623  949384 config.go:182] Loaded profile config "pause-719849": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1219 00:11:54.342757  949384 driver.go:392] Setting default libvirt URI to qemu:///system
	I1219 00:11:54.367409  949384 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1219 00:11:54.367578  949384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 00:11:54.458643  949384 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-12-19 00:11:54.448644792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1219 00:11:54.458784  949384 docker.go:295] overlay module found
	I1219 00:11:54.460720  949384 out.go:177] * Using the docker driver based on user configuration
	I1219 00:11:54.462566  949384 start.go:298] selected driver: docker
	I1219 00:11:54.462583  949384 start.go:902] validating driver "docker" against <nil>
	I1219 00:11:54.462597  949384 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 00:11:54.464791  949384 out.go:177] 
	W1219 00:11:54.466536  949384 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1219 00:11:54.468005  949384 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-468021 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-468021

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-468021

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-468021

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-468021

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-468021

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-468021

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-468021

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-468021

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-468021

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-468021

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-468021

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-468021" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-468021" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 19 Dec 2023 00:11:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-719849
contexts:
- context:
    cluster: pause-719849
    extensions:
    - extension:
        last-update: Tue, 19 Dec 2023 00:11:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-719849
  name: pause-719849
current-context: pause-719849
kind: Config
preferences: {}
users:
- name: pause-719849
  user:
    client-certificate: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/pause-719849/client.crt
    client-key: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/pause-719849/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-468021

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-468021"

                                                
                                                
----------------------- debugLogs end: false-468021 [took: 4.094473617s] --------------------------------
helpers_test.go:175: Cleaning up "false-468021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-468021
--- PASS: TestNetworkPlugins/group/false (4.50s)

                                                
                                    
x
+
TestPause/serial/Pause (1.26s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-719849 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-719849 --alsologtostderr -v=5: (1.256224077s)
--- PASS: TestPause/serial/Pause (1.26s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-719849 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-719849 --output=json --layout=cluster: exit status 2 (440.908547ms)

                                                
                                                
-- stdout --
	{"Name":"pause-719849","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-719849","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)
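
For reference: the --layout=cluster status above reports the cluster and each component with HTTP-like status codes; the codes that appear in this report are 200 (OK), 405 (Stopped), 418 (Paused), 500 (Error) and 507 (InsufficientStorage). A minimal sketch that decodes this shape, modelling only the fields visible here:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Only the fields that appear in the --layout=cluster output above are modelled.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
	Nodes      []node               `json:"Nodes"`
}

func main() {
	// e.g. pipe "minikube status -p pause-719849 --output=json --layout=cluster" into this.
	var st clusterStatus
	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for name, c := range st.Components {
		fmt.Printf("  %s: %s (%d)\n", name, c.StatusName, c.StatusCode)
	}
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s (%d)\n", n.Name, name, c.StatusName, c.StatusCode)
		}
	}
}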

                                                
                                    
x
+
TestPause/serial/Unpause (0.94s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-719849 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.94s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.46s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-719849 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-719849 --alsologtostderr -v=5: (1.458639751s)
--- PASS: TestPause/serial/PauseAgain (1.46s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.45s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-719849 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-719849 --alsologtostderr -v=5: (3.448055194s)
--- PASS: TestPause/serial/DeletePaused (3.45s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-719849
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-719849: exit status 1 (23.537543ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-719849: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (137.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-288593 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1219 00:14:35.701153  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1219 00:14:39.920753  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-288593 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m17.37373633s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (137.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-288593 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [593ab3f3-67cc-472d-a27d-d3ad980e752b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [593ab3f3-67cc-472d-a27d-d3ad980e752b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003563203s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-288593 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-288593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-288593 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-288593 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-288593 --alsologtostderr -v=3: (12.156934768s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-288593 -n old-k8s-version-288593
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-288593 -n old-k8s-version-288593: exit status 7 (122.869742ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-288593 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)
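In the EnableAddonAfterStop steps, `minikube status --format={{.Host}}` exits non-zero (exit status 7 here) for the stopped profile while still printing `Stopped`, and the test logs "status error ... (may be ok)" and carries on. A hedged Go sketch of that tolerance, using the binary path and profile name shown in the log above; the real logic lives in start_stop_delete_test.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Query only the host state for the profile, as the test does.
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-288593")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))

	if err != nil {
		// A stopped profile makes minikube exit non-zero (exit status 7
		// in the log); when the printed state is "Stopped" that is fine.
		if ee, ok := err.(*exec.ExitError); ok && host == "Stopped" {
			fmt.Printf("profile stopped (exit %d), continuing\n", ee.ExitCode())
			return
		}
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host state:", host)
}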

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (452.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-288593 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-288593 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m32.086070289s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-288593 -n old-k8s-version-288593
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (452.67s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (69.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-401441 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1219 00:16:41.572537  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-401441 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m9.90101417s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-401441 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [48a587d8-27c5-4cd3-8112-c67d9d8f46c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [48a587d8-27c5-4cd3-8112-c67d9d8f46c4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003470231s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-401441 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-401441 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-401441 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.066698346s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-401441 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-401441 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-401441 --alsologtostderr -v=3: (12.018182223s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-401441 -n no-preload-401441
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-401441 -n no-preload-401441: exit status 7 (89.880463ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-401441 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (345.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-401441 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1219 00:19:35.701557  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1219 00:19:39.920652  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
E1219 00:19:44.617320  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1219 00:21:41.572471  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1219 00:22:42.966277  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-401441 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (5m45.046780042s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-401441 -n no-preload-401441
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (345.78s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-jkjch" [65ba0018-d920-4bfb-9437-a76190afb8bb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005181686s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-528tp" [298210ef-280f-4bb6-8938-7e8ab9d37034] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-528tp" [298210ef-280f-4bb6-8938-7e8ab9d37034] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.004123886s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-jkjch" [65ba0018-d920-4bfb-9437-a76190afb8bb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004202958s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-288593 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-288593 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-288593 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-288593 -n old-k8s-version-288593
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-288593 -n old-k8s-version-288593: exit status 2 (405.149439ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-288593 -n old-k8s-version-288593
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-288593 -n old-k8s-version-288593: exit status 2 (380.235747ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-288593 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-288593 -n old-k8s-version-288593
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-288593 -n old-k8s-version-288593
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.80s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-528tp" [298210ef-280f-4bb6-8938-7e8ab9d37034] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003969321s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-401441 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (89.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-187992 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-187992 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m29.856459499s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-401441 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-401441 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-401441 --alsologtostderr -v=1: (1.317820959s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-401441 -n no-preload-401441
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-401441 -n no-preload-401441: exit status 2 (428.070761ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-401441 -n no-preload-401441
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-401441 -n no-preload-401441: exit status 2 (435.617487ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-401441 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-401441 --alsologtostderr -v=1: (1.000207989s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-401441 -n no-preload-401441
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-401441 -n no-preload-401441
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.78s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-744475 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1219 00:24:35.701689  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1219 00:24:39.920038  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-744475 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m21.781910695s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-187992 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8f9e302f-9027-42c2-b89a-1306cc061d9b] Pending
helpers_test.go:344: "busybox" [8f9e302f-9027-42c2-b89a-1306cc061d9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8f9e302f-9027-42c2-b89a-1306cc061d9b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003442091s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-187992 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-744475 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0073ef84-eb25-464e-8934-8df10d4d7c8a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0073ef84-eb25-464e-8934-8df10d4d7c8a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003357647s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-744475 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-187992 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-187992 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.125929382s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-187992 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-187992 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-187992 --alsologtostderr -v=3: (12.052835734s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-744475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-744475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.124559803s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-744475 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-744475 --alsologtostderr -v=3
E1219 00:25:41.878446  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:25:41.883748  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:25:41.895084  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:25:41.915434  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:25:41.956138  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:25:42.037251  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:25:42.197809  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:25:42.518382  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:25:43.159374  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:25:44.439980  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:25:47.000190  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-744475 --alsologtostderr -v=3: (12.025188227s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-187992 -n embed-certs-187992
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-187992 -n embed-certs-187992: exit status 7 (92.179368ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-187992 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (348.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-187992 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1219 00:25:52.120873  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-187992 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m47.61940227s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-187992 -n embed-certs-187992
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (348.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-744475 -n default-k8s-diff-port-744475
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-744475 -n default-k8s-diff-port-744475: exit status 7 (90.552494ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-744475 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (631.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-744475 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1219 00:26:02.361302  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:26:22.841583  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:26:41.572768  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1219 00:27:03.802381  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:27:31.899450  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:27:31.904762  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:27:31.915097  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:27:31.935393  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:27:31.975662  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:27:32.056031  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:27:32.216370  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:27:32.536596  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:27:33.176912  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:27:34.457857  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:27:37.018107  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:27:42.142056  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:27:52.382815  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:28:12.863485  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:28:25.723260  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:28:53.824459  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:29:18.752880  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1219 00:29:35.701149  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1219 00:29:39.920387  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
E1219 00:30:15.744836  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:30:41.878537  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:31:09.564357  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-744475 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m30.851324842s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-744475 -n default-k8s-diff-port-744475
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (631.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-k7qg2" [a0424f8f-5513-4ada-bc70-d012fe520158] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-k7qg2" [a0424f8f-5513-4ada-bc70-d012fe520158] Running
E1219 00:31:41.572678  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.003459037s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-k7qg2" [a0424f8f-5513-4ada-bc70-d012fe520158] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003973082s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-187992 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-187992 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-187992 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-187992 --alsologtostderr -v=1: (1.196028494s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-187992 -n embed-certs-187992
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-187992 -n embed-certs-187992: exit status 2 (492.388568ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-187992 -n embed-certs-187992
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-187992 -n embed-certs-187992: exit status 2 (533.885211ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-187992 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-187992 --alsologtostderr -v=1: (1.505752733s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-187992 -n embed-certs-187992
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-187992 -n embed-certs-187992
--- PASS: TestStartStop/group/embed-certs/serial/Pause (5.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (60.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-284538 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E1219 00:32:31.899119  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
E1219 00:32:59.585811  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-284538 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m0.292280161s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-284538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-284538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.804994176s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.81s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-284538 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-284538 --alsologtostderr -v=3: (1.37746935s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-284538 -n newest-cni-284538
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-284538 -n newest-cni-284538: exit status 7 (90.670536ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-284538 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (31.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-284538 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-284538 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (31.453103453s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-284538 -n newest-cni-284538
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.89s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-284538 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-284538 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-284538 -n newest-cni-284538
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-284538 -n newest-cni-284538: exit status 2 (408.746589ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-284538 -n newest-cni-284538
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-284538 -n newest-cni-284538: exit status 2 (372.183717ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-284538 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-284538 -n newest-cni-284538
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-284538 -n newest-cni-284538
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.31s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (80.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-468021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1219 00:34:35.701324  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/addons-045387/client.crt: no such file or directory
E1219 00:34:39.920151  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-468021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m20.46532109s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.47s)
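Note: every Start step in TestNetworkPlugins uses the same invocation pattern and differs only in how the CNI is selected (no --cni flag here for "auto"; --cni=kindnet/calico/flannel/bridge, a manifest path, or --enable-default-cni=true in the groups below). A sketch of the common form, assuming the docker driver and crio runtime used throughout this run:

$ out/minikube-linux-arm64 start -p auto-468021 --memory=3072 \
    --wait=true --wait-timeout=15m --driver=docker --container-runtime=crio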

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-468021 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-468021 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cvbsd" [964ff461-3aa1-4edb-b0ef-a698df137ef5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cvbsd" [964ff461-3aa1-4edb-b0ef-a698df137ef5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003378876s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)
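Note: the readiness wait above has a simple manual equivalent. A sketch, assuming the auto-468021 context is still present and that testdata/netcat-deployment.yaml labels its pods app=netcat, as the selector above implies:

$ kubectl --context auto-468021 replace --force -f testdata/netcat-deployment.yaml
$ kubectl --context auto-468021 get pods -n default -l app=netcat -w   # watch until the netcat pod reports Running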

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-468021 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
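Note: the last three steps probe the pod network from inside the netcat deployment: in-cluster DNS (nslookup kubernetes.default), loopback on the pod itself (nc to localhost:8080), and hairpin traffic, i.e. the pod reaching its own service name and being routed back to itself (nc to netcat:8080). A compact manual sketch, assuming the auto-468021 context:

$ kubectl --context auto-468021 exec deployment/netcat -- nslookup kubernetes.default
$ kubectl --context auto-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
$ kubectl --context auto-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin check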

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (78.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-468021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1219 00:35:41.878486  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-468021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m18.205605738s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jz722" [e6310776-b37c-4aa7-b1b2-d84b416a3683] Running
E1219 00:36:24.618187  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004433645s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
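Note: this step verifies that the user-deployed dashboard workload is back after the stop/start cycle earlier in the serial group. A manual spot check, assuming the default-k8s-diff-port-744475 context and the label/namespace used above:

$ kubectl --context default-k8s-diff-port-744475 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard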

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jz722" [e6310776-b37c-4aa7-b1b2-d84b416a3683] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003542427s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-744475 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-744475 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-744475 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-744475 -n default-k8s-diff-port-744475
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-744475 -n default-k8s-diff-port-744475: exit status 2 (385.160029ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-744475 -n default-k8s-diff-port-744475
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-744475 -n default-k8s-diff-port-744475: exit status 2 (384.14144ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-744475 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-744475 -n default-k8s-diff-port-744475
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-744475 -n default-k8s-diff-port-744475
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.58s)
E1219 00:41:41.573167  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/functional-348956/client.crt: no such file or directory
E1219 00:41:52.127169  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/default-k8s-diff-port-744475/client.crt: no such file or directory
E1219 00:41:57.235803  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/kindnet-468021/client.crt: no such file or directory
E1219 00:41:57.241141  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/kindnet-468021/client.crt: no such file or directory
E1219 00:41:57.251375  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/kindnet-468021/client.crt: no such file or directory
E1219 00:41:57.271759  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/kindnet-468021/client.crt: no such file or directory
E1219 00:41:57.312131  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/kindnet-468021/client.crt: no such file or directory
E1219 00:41:57.392447  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/kindnet-468021/client.crt: no such file or directory
E1219 00:41:57.552831  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/kindnet-468021/client.crt: no such file or directory
E1219 00:41:57.873533  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/kindnet-468021/client.crt: no such file or directory
E1219 00:41:58.513719  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/kindnet-468021/client.crt: no such file or directory
E1219 00:41:59.793952  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/kindnet-468021/client.crt: no such file or directory
E1219 00:42:02.355143  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/kindnet-468021/client.crt: no such file or directory
E1219 00:42:04.924629  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/old-k8s-version-288593/client.crt: no such file or directory
E1219 00:42:07.475361  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/kindnet-468021/client.crt: no such file or directory
E1219 00:42:17.716075  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/kindnet-468021/client.crt: no such file or directory

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (78.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-468021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-468021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m18.844741822s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-pzlc5" [80212f81-96a0-426b-a4fc-2f5fb8f5992d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006393371s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
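Note: the controller-pod wait just confirms the CNI's daemon pod is Running in kube-system. A manual equivalent, assuming the kindnet-468021 context and the app=kindnet label used above:

$ kubectl --context kindnet-468021 get pods -n kube-system -l app=kindnet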

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-468021 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-468021 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h5kqd" [e05ade84-843f-48b8-936d-f3e10a83567a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-h5kqd" [e05ade84-843f-48b8-936d-f3e10a83567a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004897514s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-468021 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (73.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-468021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-468021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m13.666450371s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.67s)
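Note: --cni also accepts a path to a CNI manifest instead of a built-in plugin name; the run above points it at testdata/kube-flannel.yaml. A sketch, assuming a flannel manifest is available at that relative path:

$ out/minikube-linux-arm64 start -p custom-flannel-468021 --memory=3072 \
    --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio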

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jhh8b" [b177a3ef-63ca-401e-8f29-4a83e7d888e4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005397842s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-468021 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-468021 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2frd5" [4c15de48-442b-43dc-adc8-821d885886a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2frd5" [4c15de48-442b-43dc-adc8-821d885886a5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005101986s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-468021 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (88.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-468021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-468021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m28.607970729s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.61s)
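Note: this variant selects no named plugin and instead passes --enable-default-cni=true, the older flag that asks minikube to apply its default CNI. A sketch of the invocation, assuming the same driver and runtime as the rest of the run:

$ out/minikube-linux-arm64 start -p enable-default-cni-468021 --memory=3072 \
    --enable-default-cni=true --driver=docker --container-runtime=crio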

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-468021 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-468021 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kz85c" [ae3198ee-038c-49a6-9b8b-0214277544af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kz85c" [ae3198ee-038c-49a6-9b8b-0214277544af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.006046438s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-468021 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (70.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-468021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1219 00:39:39.920252  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/ingress-addon-legacy-715187/client.crt: no such file or directory
E1219 00:40:04.806131  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/auto-468021/client.crt: no such file or directory
E1219 00:40:04.811361  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/auto-468021/client.crt: no such file or directory
E1219 00:40:04.821582  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/auto-468021/client.crt: no such file or directory
E1219 00:40:04.841802  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/auto-468021/client.crt: no such file or directory
E1219 00:40:04.882019  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/auto-468021/client.crt: no such file or directory
E1219 00:40:04.962251  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/auto-468021/client.crt: no such file or directory
E1219 00:40:05.122582  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/auto-468021/client.crt: no such file or directory
E1219 00:40:05.443072  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/auto-468021/client.crt: no such file or directory
E1219 00:40:06.084196  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/auto-468021/client.crt: no such file or directory
E1219 00:40:07.365303  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/auto-468021/client.crt: no such file or directory
E1219 00:40:09.925921  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/auto-468021/client.crt: no such file or directory
E1219 00:40:15.046872  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/auto-468021/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-468021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m10.268098891s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-468021 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-468021 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-srq47" [cb0fa83d-4dc1-4da1-b0e5-0ac774e618dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-srq47" [cb0fa83d-4dc1-4da1-b0e5-0ac774e618dd] Running
E1219 00:40:25.288052  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/auto-468021/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003782479s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-468021 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-r9rnp" [cb7d1275-b6b5-4e21-a6c1-9a1187a237a9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004727995s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (91.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-468021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-468021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m31.509851138s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-468021 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-468021 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wp6hj" [8746de75-f46a-4803-86f0-4fc3ad4d90f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wp6hj" [8746de75-f46a-4803-86f0-4fc3ad4d90f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004002179s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-468021 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-468021 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-468021 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bpnct" [71f191f0-affa-44b2-9aae-0652f0971c0c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bpnct" [71f191f0-affa-44b2-9aae-0652f0971c0c] Running
E1219 00:42:31.899121  817378 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/no-preload-401441/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004096186s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-468021 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-468021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    

Test skip (31/315)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
aaa_download_only_test.go:102: No preload image
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.67s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-062537 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-062537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-062537
--- SKIP: TestDownloadOnlyKic (0.67s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1786: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-552228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-552228
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-468021 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-468021

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-468021

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-468021

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-468021

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-468021

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-468021

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-468021

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-468021

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-468021

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-468021

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-468021

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-468021" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-468021" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 19 Dec 2023 00:11:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-719849
contexts:
- context:
    cluster: pause-719849
    extensions:
    - extension:
        last-update: Tue, 19 Dec 2023 00:11:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-719849
  name: pause-719849
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-719849
  user:
    client-certificate: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/pause-719849/client.crt
    client-key: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/pause-719849/client.key
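Editor's note: the kubeconfig above only contains the pause-719849 context and its current-context is empty, which is why every kubectl-based probe in this dump fails with "context was not found for specified context: kubenet-468021". A minimal sketch of how that precondition could be checked programmatically, assuming k8s.io/client-go/tools/clientcmd and a KUBECONFIG path taken from the environment; this is not part of the minikube test suite:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := os.Getenv("KUBECONFIG") // assumption: kubeconfig path supplied via the environment
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	want := "kubenet-468021"
	if _, ok := cfg.Contexts[want]; !ok {
		// The case seen in this dump: only pause-719849 exists, so any
		// kubectl --context kubenet-468021 invocation fails up front.
		fmt.Printf("context %q not in kubeconfig (current-context=%q)\n", want, cfg.CurrentContext)
		return
	}
	fmt.Println("context exists:", want)
}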

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-468021

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-468021"

                                                
                                                
----------------------- debugLogs end: kubenet-468021 [took: 5.679020974s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-468021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-468021
--- SKIP: TestNetworkPlugins/group/kubenet (5.86s)
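Editor's note: the kubenet group is skipped up front at net_test.go:93 because kubenet provides no CNI support while the crio runtime requires one. A minimal sketch of that kind of runtime-based skip guard, assuming a hypothetical ContainerRuntime helper in place of however the suite actually reads the --container-runtime flag; this is not the real net_test.go code:

package net_test

import "testing"

// ContainerRuntime is a hypothetical stand-in for the suite's runtime lookup.
func ContainerRuntime() string { return "crio" }

func TestKubenetGroup(t *testing.T) {
	if ContainerRuntime() == "crio" {
		// Mirrors the message logged at net_test.go:93 above.
		t.Skip("Skipping the test as crio container runtimes requires CNI")
	}
	// kubenet-specific network checks would run here otherwise.
}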

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-468021 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-468021" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-468021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-468021" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17822-812008/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 19 Dec 2023 00:11:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-719849
contexts:
- context:
    cluster: pause-719849
    extensions:
    - extension:
        last-update: Tue, 19 Dec 2023 00:11:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-719849
  name: pause-719849
current-context: pause-719849
kind: Config
preferences: {}
users:
- name: pause-719849
  user:
    client-certificate: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/pause-719849/client.crt
    client-key: /home/jenkins/minikube-integration/17822-812008/.minikube/profiles/pause-719849/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-468021

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-468021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-468021"

                                                
                                                
----------------------- debugLogs end: cilium-468021 [took: 6.338668602s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-468021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-468021
--- SKIP: TestNetworkPlugins/group/cilium (6.52s)
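Editor's note: both debugLogs dumps above show the same pattern: the profile was never started, yet every host and kubectl probe still runs and its error output is recorded under a ">>> label:" header. A rough sketch of that collect-and-continue pattern, with an illustrative profile name and two example probes; this is not the actual helpers_test.go implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "cilium-468021" // illustrative profile name
	probes := []struct{ label, cmd string }{
		{"host: crio config", "out/minikube-linux-arm64 -p " + profile + " ssh -- sudo crio config"},
		{"k8s: kubectl config", "kubectl config view"},
	}
	for _, p := range probes {
		out, _ := exec.Command("sh", "-c", p.cmd).CombinedOutput()
		// Output is printed even when the command fails (for example with
		// "Profile ... not found"), which is why the dumps above are full of
		// error text rather than stopping at the first failure.
		fmt.Printf(">>> %s:\n%s\n", p.label, out)
	}
}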

                                                
                                    